Plenary · Plenary
Opening Plenary: Modeling for Energy Resilience: How DOE
Uses Simulation to Model and Manage Everything from the
Power Grid to the Strategic Petroleum Reserve
Chair: Bahar Biller (SAS Institute, Inc)
Modeling for Energy Resilience: How DOE Uses
Simulation to Model and Manage Everything from the
Power Grid to the Strategic Petroleum Reserve
Ann Dunkin (Department of Energy)
Abstract
The U.S. Department of Energy’s
responsibilities run the gamut from managing the
nuclear stockpile and the strategic petroleum
reserve to running the power grid in 36 states
to performing basic and applied research to
protect national security, ensure stable power
sector operations and accelerate the clean
energy transition. Leveraging the power of DOE's computing infrastructure, including the world's fastest supercomputer, researchers use simulation models to accelerate advancements in nearly every field of research across DOE. Through a series of examples
highlighting grid management, cybersecurity,
cavern modeling and fundamental physical
phenomena, this keynote will illuminate how DOE
applies modeling and simulation to both research
and operations.
Plenary · Plenary
Titans of Simulation: Resilience of Supply Chains and the
Role of Simulation
Chair: John Shortle (George Mason University)
Resilience of Supply Chains and the Role of
Simulation
John Fowler (Arizona State University)
Abstract
Supply chain resilience refers to the capacity
of a supply chain to proactively prepare for
unforeseen events, effectively address
disruptions, and bounce back from them while
ensuring the sustained smooth operation of the
supply chain at the preferred level of
connectivity and management of its structure and
functions. Recent disruptive events, including the COVID-19 pandemic and the Russian invasion of Ukraine, have placed increased emphasis on supply chain resilience. In this presentation,
we discuss strategies to prepare for, address,
and bounce back from (potential) disruptions and
the role that simulation can play in enhancing
supply chain resilience.
Plenary · Plenary
Titans of Simulation: Ensuring Food Security under Climate
Change: How Simulation Can Help in Making Agricultural
Supply Chains More Resilient
Chair: John Shortle (George Mason University)
Ensuring Food Security under Climate Change: How
Simulation Can Help in Making Agricultural Supply
Chains More Resilient
Enver Yücesan (INSEAD)
Abstract
Climate change and the resulting increased
frequency of unpredictable extreme weather
events create new operational challenges for the
commercial seed industry, which is a key pillar
of a sustainable and secure global food supply.
More specifically, extreme weather events translate into two main effects on agricultural production: higher yield variability and lower expected yields. In recent years, extreme weather events have already caused reductions in the yields of cereals, maize, and other staple crops. It is also projected that a warming of +2°C (+4°C) would increase the coefficient of variation of corn yield by 62% (192%) in six countries that collectively account for 73% of global production. In this presentation, we
first examine how the increased likelihood of
extreme weather events affects agricultural
supply chains in terms of R&D, production
planning, contracting, allocation, and storage
decisions. We then discuss the key challenges
associated with each stage and highlight how
simulation can help address them under increased
volatility.
Track Coordinator - Advanced Tutorials: Henry Lam (Columbia University), Giulia Pedrielli (Arizona
State University)
Tutorial · Advanced Tutorials
Screening Simulated Systems for Optimization
Chair: Eunhye Song (Georgia Institute of Technology)
Jinbo Zhao (Texas A&M University), Javier Gatica
(Pontificia Universidad Catolica de Chile), and David
Eckman (Texas A&M University)
Abstract
Screening procedures for ranking and selection
have received less attention than selection
procedures, yet they serve as a cheap and
powerful tool for decision making under
uncertainty. Research on screening procedures
has been less active in recent years, just as
the advent of parallel computing has
dramatically reshaped how selection procedures
are designed and implemented. As a result,
screening procedures used in modern practice
continue to largely operate offline on fixed
data. In this tutorial, we provide an overview
of screening procedures with the goal of
clarifying the current state of research and
laying out opportunities for future development.
We discuss several guarantees delivered by
screening procedures and their role in different
decision-making settings and investigate their
impact on screening power and sampling
efficiency in numerical experiments. We also
study the implementation of screening procedures
in parallel computing environments and how they
can be combined with selection procedures.
Tutorial · Advanced Tutorials
Practical Impact and Academia Are Not Antonyms
Chair: Russell R. Barton (Pennsylvania State
University)
Shane Henderson (Cornell University)
Abstract
This tutorial discusses principles and
strategies for the interplay between applied
work with organizations and an academic research
agenda. I emphasize lessons I have learned
through my own work and my own mistakes, with
special focus on some high-stakes settings,
including advising Cornell University’s
response to the COVID-19 pandemic and work with
the emergency services, among other
applications.
Tutorial · Advanced Tutorials
Statistical Limit Theorems in Distributionally Robust
Optimization
Chair: Henry Lam (Columbia University)
Jose Blanchet (Stanford University) and Alexander
Shapiro (Georgia Institute of Technology)
Abstract
The goal of this paper is to develop a methodology for the systematic analysis of asymptotic statistical properties of data-driven distributionally robust optimization (DRO) formulations based on their corresponding non-DRO counterparts. We illustrate our approach
in various settings, including both
phi-divergence and Wasserstein uncertainty sets.
Different types of asymptotic behaviors are
obtained depending on the rate at which the
uncertainty radius decreases to zero as a
function of the sample size and the geometry of
the uncertainty sets.
Tutorial · Advanced Tutorials
Digital Twins: Features, Models, and Services
Chair: Feng Ju (Arizona State University)
Andrea Matta (Politecnico di Milano) and
Giovanni Lugaresi (KU Leuven)
Abstract
This work provides an overview of digital twins,
digital replicas of real entities conceived to
support analysis, improvements, and optimal
decisions. Specifically, it aims to better
clarify what digital twins are by pointing out
their main features, what they can do to support
their related physical twins, and which models
they use. An illustrative example together with
a few selected application examples is used to
better describe digital twins. A discussion of current challenges and research opportunities is also provided.
Tutorial · Advanced Tutorials
Bootstrapping and Batching for Output Analysis
Chair: Sara Shashaani (North Carolina State University)
Raghu Pasupathy (Purdue University)
Abstract
We review bootstrapping and batching as devices
for statistical inference in simulation output
analysis. Bootstrapping, discovered in the late
1970s and developed over the ensuing three
decades, is widely held as being among the
important scientific discoveries of the previous
century due primarily to its facility for
general statistical inference. By contrast,
batching was introduced in the 1960s but was
developed within the simulation community (in
the 1980s) for the narrower contexts of variance
parameter estimation and confidence interval
construction. In recent years, however, there
has been increasing realization that batching,
much like bootstrapping, can be used also for
general statistical inference, and that batching
often compares favorably with bootstrapping in
dependent data contexts. Bootstrapping and
batching have tremendous applicability for
uncertainty quantification in simulation, and
are prime candidates for adoption in simulation
software. We describe the general principles
underlying bootstrapping and batching, outline
guarantees, and discuss implementation.
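To make the batching idea concrete, the following minimal sketch (illustrative only, not taken from the tutorial) computes a nonoverlapping batch-means confidence interval for a steady-state mean; the batch count of 20 is an arbitrary choice:

```python
import numpy as np
from scipy import stats

def batch_means_ci(y, n_batches=20, alpha=0.05):
    """Nonoverlapping batch-means CI for the mean of a (possibly
    dependent) stationary output sequence y."""
    y = np.asarray(y, dtype=float)
    b = len(y) // n_batches                      # batch size; remainder discarded
    means = y[: b * n_batches].reshape(n_batches, b).mean(axis=1)
    half = stats.t.ppf(1 - alpha / 2, n_batches - 1) \
        * means.std(ddof=1) / np.sqrt(n_batches)
    return means.mean() - half, means.mean() + half
```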
Tutorial · Advanced Tutorials
Coarse-Grained Simulations of DNA and RNA Systems with
oxDNA and oxRNA Models: Tutorial
Chair: Wei Xie (Northeastern University)
Matthew Sample, Michael Matthies, and Petr Sulc (Arizona
State University)
Abstract
We present a tutorial on setting up the oxDNA coarse-grained model for simulations of DNA and RNA nanotechnology. The model is a popular tool used by both theorists and experimentalists to simulate nucleic acid systems in both biology and nanotechnology settings. The tutorial is aimed at new users asking "Where should I start if I want to use oxDNA?". We assume no prior
background in using the model. This tutorial
shows basic examples that can get a novice user
started with the model, and points the
prospective user towards additional reading and
online resources depending on which aspect of
the model they are interested in pursuing.
Tutorial · Advanced Tutorials
Importance Sampling Strategy for Heavy-tailed Systems with
Catastrophe Principle
Chair: Henry Lam (Columbia University)
Importance Sampling Strategy for Heavy-Tailed Systems
with Catastrophe Principle
Xingyu Wang and Chang-Han Rhee (Northwestern University)
Abstract
Large deviations theory has a long history of
providing powerful machinery for designing
efficient rare-event simulation techniques.
However, traditional large deviations theory
fails to provide useful bounds in heavy-tailed
contexts, and designing efficient rare-event
simulation algorithms for heavy-tailed systems
has been considered challenging. Recent
developments in the theory of heavy-tailed large
deviations enable designing a strongly efficient
importance sampling scheme that is universally
applicable to a wide range of rare events. This
tutorial aims to provide an accessible overview
of the recent developments in the large
deviations theory for heavy-tailed stochastic
processes, which is followed by a detailed
account of the design principle behind the
strongly efficient importance sampling scheme
for such processes. The implementations of the
general principle are demonstrated through a few
specific heavy-tailed rare events that arise in
stochastic approximation, finance, and queueing
theory contexts.
Track Coordinator - Agent-Based Simulation: Andrew J. Collins (Old Dominion University), Chris Kuhlman
(University of Virginia)
Technical Session · Agent-based Simulation
Military and Homeland Security Agent-based Modeling
Chair: Berry Gerrits (University of Twente)
Squashing Bugs and Improving Design: Using Data
Farming to Support Verification and Validation of
Military Agent-Based Simulations
Susan K. Aros and Mary L. McDonald (Naval Postgraduate
School)
Abstract
Verification and validation of complex
agent-based human behavior simulation models is
a challenging endeavor, particularly since a
dearth of real-world data makes it impossible to
use most traditional validation methods. Data
farming techniques have stepped up to the
challenge, proving to be a valuable tool for
verification and validation of complex models.
In this paper we demonstrate how data farming
and analysis aids in the verification and
validation of complex models by presenting
specific examples pertaining to WRENCH, an
agent-based simulation model that represents
complex interactions between security forces and
civilians during civil security stability
operations. We first provide an overview of data
farming and its relevance for verification and
validation of military agent-based simulation
models, then give an overview of WRENCH, and
finally demonstrate with examples how we have
used data farming to aid in the verification and
validation of WRENCH.
Beyond Accuracy: Cybersecurity Resilience Evaluation
of Intrusion Detection System against DoS Attacks
using Agent-based Simulation
Jeongkeun Shin, Geoffrey B. Dobson, L. Richard Carley,
and Kathleen M. Carley (Carnegie Mellon University)
Abstract
Machine Learning has become increasingly popular
in developing Intrusion Detection Systems (IDS)
for cybersecurity. However, the focus has mainly
been on achieving high detection accuracy rather
than evaluating the impact on cybersecurity
resiliency. In this paper, we use agent-based
simulation to investigate the impact of
different IDS algorithms on the cybersecurity
resiliency of organizations under DoS attacks.
Our simulation includes a server agent equipped
with either Naive Bayes or SMO-based IDS, and a
cybercriminal agent capable of launching
different types of Denial of Service attacks.
Our results suggest that the choice of IDS
algorithm can significantly affect an
organization’s cybersecurity resiliency
against DoS attacks. Specifically, while SMO
shows better overall accuracy on the KDD Cup
1999 dataset, Naive Bayes-based IDS proves more
effective in practice due to its better-balanced
detection rates across different types of DoS
attacks. Our findings have important
implications for improving organizations’
cybersecurity posture.
Using Evolutionary Model Discovery to Develop Robust
Policies
Alex Isherwood, Matthew Koehler, and David Slater (MITRE
Corporation)
Abstract
Agent-based models can be a powerful tool for
evaluating the impact of policy decisions on a
population. However, analyses are traditionally
beholden to one set of rules hypothesized at the
conception of the model. Modelers make
assumptions of agent behavior that are not
necessarily governed by data and the actual
behavior of the true population can vary.
Evolutionary Model Discovery (EMD) provides a solution to this problem by leveraging genetic algorithms and genetic programming to explore the plausible set of rules that can explain agent behavior. Here we describe an initial use of the EMD system to develop robust policies in a resource-constrained environment. In this instance, we extend the NetLogo implementation of the Epstein Rebellion model of civil violence as a sample problem. We use the EMD framework to
generate plausible populations and then develop
policy responses for the government that are
robust across the plausible populations.
Technical Session · Agent-based Simulation
Healthcare Agent-based Modeling
Chair: Xueying Liu (Virginia Polytechnic Institute and
State University)
An Iterative Analysis Method Using Causal Discovery
Algorithms to Enhance ABM as a Policy Tool
Shuang Chang, Takashi Kato, Yusuke Koyanagi, Kento
Uemura, and Koji Maruhashi (Fujitsu Laboratories Ltd.)
Abstract
Agent-based modelling (ABM) is becoming a
popular policy tool by modelling the reasoning
processes and interactive behaviors of
individuals against external environments.
However, the presence of heterogeneous agents, non-linear interactions, and complex emergent patterns arising from even simple behavior rules poses challenges for model explanation. In this work, we propose a novel
iterative analysis method that leverages causal
discovery algorithms to facilitate policy
formulation and evaluation based on a causal
understanding of the model. It strengthens the
explanation power of ABM by elucidating causal
relations among modelled components. We applied
the method to an agent-based simulator that
models passengers' routing behaviors in a
virtual airport terminal. By discovering the
causal relations among passengers' goals,
actions, and an airport terminal environment
under different COVID-19 regulations, we showed
that the method can inform more effective
indirect-control policies leading to positive
passenger experiences, compared with a
conventional ABM analysis method.
A Review of Agent-based Modeling Applications in
Substance Abuse Policy Research
Xiang Zhong (University of Florida), Xuanjing Li
(Tsinghua University), and Samantha Mangoni (University
of Florida)
Abstract
This study provides a systematic review of
existing studies that used agent-based modeling
(ABM) to inform substance abuse policies and
identifies future research directions. The
detailed review included 20 articles, among which tobacco, alcohol, cannabis, opioid, and heroin abuse were studied. These
studies examined substance abuse interventions
and the associations between substance use and
social behavior, such as peer interaction and
selection. Effective interventions included
retailer density reduction policies, restriction
of trading hours of licensed venues, ecstasy
pill-testing and passive-alert detection dogs by
police at public venues, and a mass-media drug
prevention education policy. ABM can capture the
dynamic interactions among and between agents
and environments, making it appropriate to model
complex substance abuse behaviors. Limitations
in current studies include a lack of ABM
validation efforts and generalizable data.
Future studies should use generalizable and
abundant information to inform their ABM, as
well as have an explicit validation method.
Supporting Emergency Department Risk Mitigation with
a Modular and Reusable Agent-Based Simulation
Infrastructure
Thomas Godfrey (King's College London); Rahul Batra, Sam
Douthwaite, and Jonathan Edgeworth (Guy's and St Thomas'
NHS Foundation Trust); Matthew Edwards (King's College
Hospital NHS Foundation Trust); Simon Miles (Aerogility
Ltd); and Steffen Zschaler (King's College London)
Abstract
For emergency departments (EDs) to maintain
sustainable care of patients, hospital
management must continually explore potential
interventions to clinical practice. Agent-based
modelling (ABM) can be a valuable tool to
support this planning in a controlled
environment. Existing approaches to ABM
development are best suited for one-off models.
However, conditions in EDs can change
frequently, making the use of one-off models
infeasible. Decision-makers must be able to
trust simulations appropriately for them to be
effective in intervention exploration.
Domain-specific modelling languages (DSMLs) can
address these challenges by offering a reusable library of appropriately abstract, domain-familiar modelling concepts across case studies and automatic translation of these
concepts into executable models. In this paper
we present a DSML to support repeated modelling
exercises in the ED domain and illustrate the
use and reuse of this DSML across two concrete
case studies in London-based NHS emergency
departments.
Technical Session · Agent-based Simulation
Sustainable Transportation Agent-based Modeling
Chair: Xiang Zhong (University of Florida)
Simulating Interaction Behaviors in Bi-directional
Shared Corridor with Real Case Study
Yun-Pang Flötteröd, Jakob Erdmann, and Daniel
Krajzewicz (German Aerospace Center (DLR)) and Johan
Olstam (The Swedish National Road and Transport Research
Institute)
Abstract
Microscopic traffic simulation tools are able to
evaluate possible impacts induced by automated
shuttles under various conditions. However, automated shuttles increasingly operate in shared-space areas, and few microscopic traffic simulation tools can handle networks with shared-space infrastructure. Interactions between road users and automated shuttles are also rarely addressed. In this paper, we propose the concept of
bi-directional edges in the open source
microscopic traffic simulation suite SUMO to
simulate road users’ interactions in a
bi-directional shared-space corridor. A case
study, where automated shuttles and cyclists
share the bike path, and the related data
collection were conducted to examine the
performance of the proposed concept and
understand the usage of the shared corridor. The
simulation results are promising. Further
refinement of the proposed concept is planned
for properly reflecting complex interaction
behaviors among diverse road users, and their
surrounding environment.
Rebalancing Integrated, Demand-responsive Passenger
and Freight Transport – An Agent-based
Simulation Approach
Johannes Staritz, Julia Kütemeier, Helen Sand,
Christoph von Viebahn, and Maylin Wartenberg (Hochschule
Hannover)
Abstract
Integrated, demand-responsive passenger and
freight transport (IDRT) potentially provides
flexibility and higher service frequency in
areas of low demand due to economies of scale,
while reducing negative traffic-related
externalities such as pollutant emissions, noise
emissions or accidents. However, to allow for
efficient operations in terms of minimum travel
distances, short customer waiting times, and
high vehicle utilization rates, IDRT requires
effective rebalancing strategies that balance
supply and demand capacities by strategically
positioning vehicle resources in the operational
area. Therefore, we propose a rebalancing
strategy for IDRT and measure its effectiveness
through an agent-based simulation model. To
evaluate our approach, we compare the rebalanced
IDRT with a static scenario with backhauls to a
central depot. Our results indicate that the
proposed rebalancing approach can outperform a
system without rebalancing by up to 15.1% in
terms of total fleet kilometers and 30% in terms
of passenger waiting time.
A Simulation Model for Bio-Inspired Charging
Strategies for Electric Vehicles in Industrial
Areas
Berry Gerrits and Martijn Mes (University of Twente) and
Robert Andringa (Distribute)
Abstract
This paper presents an open-source agent-based
simulation model to study bio-inspired charging
policies for local sustainable energy systems in
an industrial setting where electric vehicles
(EVs) perform transportation jobs. Within this
context, we focus on a system that allows control of the charging schemes of individual EVs. To this end, we develop an agent-based simulation model in NetLogo. We present and implement a bio-inspired approach based on the foraging behavior of honeybees, which results in simple yet effective decision-making logic. Our approach provides the necessary
parameters to control and balance sustainable
energy systems in terms of EV productivity and
the consumption of locally generated energy. Our
simulation results look promising: the balance
between EV productivity and the use of
sustainable energy can be efficiently tweaked in
a predictable manner using the parameters and
thresholds of the model, yielding
close-to-optimal performance.
Technical Session · Agent-based Simulation
Games and Agent-based Modeling
Chair: Haibei Zhu (J.P. Morgan)
Modeling Reactive Game Agents Using the Cell-DEVS
Modeling Formalism
Alvi Jawad, Cristina Ruiz-Martín, and Gabriel
Wainer (Carleton University)
Abstract
Intelligent game agents are a vital part of
modern games as they add life, story, and
immersion to the game environment. The gaming industry's demand for greater realism has made intelligent agents more important than ever before. Modeling and simulation of game agents
and their surrounding environment provide an
alternate setting to study dynamic agent
behavior before integration into the game
engine. The Cell-DEVS formalism, an extension of
Cellular Automata, allows modeling such
behaviors using the rigorously formalized
Discrete Event Systems Specification (DEVS)
formalism. In this paper, we explain how to
model and test reactive game agents using the
Cell-DEVS formalism and the CD++ toolkit. To
analyze the dynamic behavior of such agents, we
perform several experiments in varying system
configurations. Our experimental results confirm
the versatility of Cell-DEVS and the
functionalities in the CD++ toolkit to model
comfort-driven, exploratory, and desire-driven
game agents.
A Calibration Model for Bot-Like Behaviors in
Agent-Based Anagram Game Simulation
Xueying Liu, Zhihao Hu, and Xinwei Deng (Virginia Tech)
and Chris Kuhlman (University of Virginia)
Abstract
Experiments that are games played among a
network of players are widely used to study
human behavior. Furthermore, bots or intelligent
systems can be used in these games to produce
contexts that elicit particular types of human
responses. Bot behaviors could be specified
solely based on experimental data. In this work,
we take a different perspective, called the
Probability Calibration (PC) approach, to
simulate networked group anagram games with
certain players having bot-like behaviors. The
proposed method starts with data-driven models
and calibrates in principled ways the parameters
that alter player behaviors. It can alter the
performance of each type of agent (e.g., bot) in
group anagram games. Further, statistical
methods are used to test whether the PC models
produce results that are statistically different
from those of the original models. Case studies
demonstrate the merits of the proposed method.
Feature Importance for Uncertainty Quantification in
Agent-based Modeling
Gayane Grigoryan and Andrew J. Collins (Old Dominion
University)
Abstract
Simulation models are subject to uncertainty and
sensitivity, meaning that even small variations
of input can cause considerable fluctuations in
the output results. Consequently, this can
amplify the uncertainty associated with the
simulation, thereby limiting the confidence one
can have in its outcomes. To mitigate these
effects, this paper suggests using a cooperative
game theory-based feature importance method,
which can identify uncertainty in a dataset, and
provide additional insights that could be used
in the development or analysis of a simulation
model. A predator-prey scenario was considered,
demonstrating its usefulness in identifying
important parameters or features. By identifying
the most influential parameters or features,
this approach can help improve the accuracy,
explainability, and reliability of simulation
models as well as other models with highly
variable input parameters.
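For readers new to cooperative game-theoretic feature importance, the sketch below computes exact Shapley values for a small feature set. It is a generic illustration, not the authors' implementation, and value_fn is a hypothetical user-supplied coalition-value function (for example, the variance of simulation output explained by a subset of inputs):

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, features):
    """Exact Shapley values: each feature's weighted average marginal
    contribution to value_fn across all coalitions of other features."""
    n = len(features)
    phi = {}
    for f in features:
        rest = [g for g in features if g != f]
        total = 0.0
        for size in range(n):
            for coalition in combinations(rest, size):
                s = set(coalition)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi
```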
Technical Session · Agent-based Simulation
Transportation Agent-based Modeling
Chair: Kshama Dwarakanath (J.P. Morgan AI Research)
A Simulation-Based Method for Analyzing Supply Chain
Vulnerability Under Pandemic: A Special Focus on the
Covid-19
Xinglu Xu and Bochi Liu (Dalian University of
Technology) and Weihong Grace Guo (Rutgers, The State
University of New Jersey)
Abstract
This paper develops a simulation-based
quantitative method to investigate the joint
impact of multiple risks on the supply chain
system during the pandemic. A hybrid simulation
method that combines the
susceptible-infected-recovered (SIR) model and
the agent-based simulation method is proposed to
simulate the risk propagation along the supply
chain and the interactions between distribution
centers and retailers. By analyzing the results
of scenarios with different interventions under
COVID-19, results show that the impact of
interventions is diminishing along the supply
chain. For intervention deployment, adding
testing capacity is of great importance. For
stakeholder management strategies, diversifying
the upstream partners is helpful. Against the
backdrop of a multi-wave global pandemic, this
paper takes the COVID-19 pandemic as an example
to provide a paradigm for modeling the risk
propagation in supply chain systems. Also, the
study demonstrates how to estimate possible time-varying risk scenarios in the face of data shortages.
System Simulation and Machine Learning-Based
Maintenance Optimization for an Inland Waterway
Transportation System
Maryam Aghamohammadghasem, Jose Azucena, Farid
Hashemian, Haitao Liao, Shengfan Zhang, and Heather
Nachtmann (University of Arkansas)
Abstract
To continue operations of the inland waterway
transportation system (IWTS), the interconnected
infrastructure, such as locks and dam systems,
must remain in good operating condition.
However, as the IWTS ages, unexpected
disruptions increase, causing significant
transportation delays and economic losses. To
evaluate the impacts of IWTS disruptions, a
Python-enhanced NetLogo simulation tool is developed, in which extreme natural events are represented by a spatiotemporal model. Utilizing this tool, optimal maintenance
strategies that maximize cargo throughput on the
IWTS are determined via deep reinforcement
learning. A case study of the lower Mississippi
River system and the McClellan-Kerr Arkansas
River Navigation System is conducted to
illustrate the capability of the developed
simulation and machine learning-based method for
IWTS maintenance optimization.
Four Years of Not-Using a Simulator: The Agent-Based
Template
Dominik Brunmeir and Martin Bicher (TU Wien); Matthias
Rößler, Christoph Urach, Claire Rippinger, and
Matthias Wastian (dwh GmbH); and Niki Popper (TU Wien)
Abstract
With the steadily increasing performance of computers, agent-based modeling has evolved from an analysis method for qualitative phenomena into a strategy for quantitative decision support. With this orientation, however, the modeler faces new
challenges during implementation. In particular,
an appropriate simulation tool must feature the
combination of data and model flexibility,
process reproducibility, performance and
portability. While existing simulators often do
not sufficiently cover these features, it is
also not sustainable to generally implement
models from scratch. In this work, we want to
present the idea of simulation templates as a
compromise between the two strategies. Using our Agent-Based Template and two use cases as examples, we show the importance of the described challenges and how the simulation template concept helps solve them. We aim to promote the idea of developing a customized template which, as a layer between a simulator and a from-scratch implementation, combines the advantages of both approaches.
Technical Session · Agent-based Simulation
Agent-based Modeling Design
Chair: Gayane Grigoryan (Old Dominion University)
Transparency as Delayed Observability in Multi-Agent
Systems
Kshama Dwarakanath and Svitlana Vyetrenko (J.P. Morgan
AI Research), Toks Oyebode (J.P. Morgan Regulatory
Affairs), and Tucker Balch (J.P. Morgan AI Research)
Abstract
Is transparency always beneficial in complex
systems such as traffic networks and stock
markets? How is transparency defined in
multi-agent systems, and what is its optimal
degree at which social welfare is highest? We
take an agent-based view to define transparency (or the lack thereof) as delay in agents' observability of environment states, and utilize simulations
to analyze the impact of delay on social
welfare. To model the adaptation of agent
strategies with varying delays, we model agents
as learners maximizing the same objectives under
different delays in a simulated environment.
Focusing on two agent types, constrained and unconstrained, we use multi-agent reinforcement
learning to evaluate the impact of delay on
agent outcomes and social welfare. Empirical
demonstration of our framework in simulated
financial markets shows opposing trends in
outcomes of the constrained and unconstrained
agents with delay, with an optimal partial
transparency regime at which social welfare is
maximal.
Once Burned, Twice Shy? The Effect of Stock Market
Bubbles on Traders that Learn by Experience
Haibei Zhu and Svitlana Vyetrenko (J.P. Morgan), Serafin
Grundl (Federal Reserve Board), David Byrd (Bowdoin
College), and Kshama Dwarakanath and Tucker Balch (J.P.
Morgan)
Abstract
We study how experience with asset price bubbles
changes the trading strategies of reinforcement
learning (RL) traders and ask whether the change
in trading strategies helps to prevent future
bubbles. We train the RL traders in a
multi-agent market simulation platform, ABIDES,
and compare the strategies of traders trained
with and without bubble experience. We find that
RL traders without bubble experience behave like
short-term momentum traders, whereas traders
with bubble experience behave like value
traders. Therefore, RL traders without bubble
experience amplify bubbles, whereas RL traders
with bubble experience tend to suppress and
sometimes prevent them. This finding suggests
that learning from experience is a mechanism for
a boom and bust cycle where the experience of a
collapsing bubble makes future bubbles less
likely for a period of time until the memory
fades and bubbles become more likely to form
again.
Matchmaking in Crowd-shipping Platforms: The Effects
of Mediator Control
Preetam Kulkarni and Caroline C. Krejci (University of
Texas at Arlington)
Abstract
Abstract
A critical design decision for crowdsourcing
platforms is the degree to which the platform
mediator controls participant interactions.
Platforms with a centralized model of mediation optimize for convenience, speed, and security in participant interactions, while platforms operating under decentralized control require greater user effort but offer users greater control and agency. The research
described in this paper is a preliminary study
using agent-based modeling to evaluate and
compare the performance of crowd-shipping
platforms with centralized/decentralized control
over matchmaking of carriers and senders.
Results indicate that centralized matchmaking
protects the platform from premature failure
when initial carrier/sender participation is
low. Furthermore, when the platform’s
assignment algorithm is designed to maximize
platform revenue, subject to meeting
carriers’ profit expectations, centralized
matchmaking will tend to outperform
decentralized matchmaking for both the mediator
and the carriers.
Track Coordinator - Analysis Methodology: Ben Feng (University of Waterloo), Sara Shashaani (North
Carolina State University)
Technical Session · Analysis Methodology
Simulation in Queueing Systems
Chair: Jun Luo (Shanghai Jiao Tong University)
Real-Time Estimations for the Waiting-Time
Distribution in Time-Varying Queues
Kurtis Konrad and Yunan Liu (North Carolina State
University)
Abstract
Customers’ waiting times are the most
commonly used performance data to measure the
quality of service in service systems such as
call centers and healthcare. Unlike stationary
queueing models where customers’ waiting
times are statistically similar, the prediction
of waiting times is far less straightforward in
time-varying queues having nonstationary demand
(i.e., arrival rate) and supply (i.e., number of
servers). In this paper, we develop a novel
methodology for more accurately computing the
wait time distribution in a time-varying
queueing system. We design extensive simulation
experiments to evaluate our prediction methods.
In addition, we discover that the waiting-time
prediction is highly sensitive to the
work-releasing policy of the staffing plan,
i.e., the rule under which the number of servers
changes in time.
Achieving Stable Service-Level Targets in
Time-Varying Queueing Systems: A Simulation-Based
Offline Learning Staffing Algorithm
Kurtis Konrad and Yunan Liu (North Carolina State
University)
Abstract
In this paper, we develop a new staffing
algorithm for achieving stable service-level
targets in queues with time-varying arrivals.
Specifically, we aim to stabilize the tail
probability of delay, which is the probability
that the waiting time exceeds a designated
target τ > 0. We integrate reinforcement learning into the decision making in queueing models; our new method recursively evolves the staffing decision by alternating between two phases: (i) we generate simulated queueing data by operating the system under the present staffing function (exploration), and (ii) we utilize the newly generated data to devise an improved staffing decision (exploitation). We
demonstrate the effectiveness of our new method
using various numerical examples.
Estimating Spline-based Nonhomogeneous Poisson
Intensities Using Constrained Quadratic
Programming
Siqi Chen, Jing Yang (Sunny) Xi, and Wai Kin (Victor)
Chan (Tsinghua-Berkeley Shenzhen Institute, Shenzhen
International Graduate School, Tsinghua University)
Abstract
This paper estimates the intensity function of a
nonhomogeneous Poisson process (NHPP) using a
spline-based method with constrained quadratic
programming (CQP). Based on the property of
B-splines, we transform the estimation problem
into an optimization problem and apply CQP to
obtain the estimated intensity function with low
computational expense. Numerical experiments are
conducted to verify the performance of our
method. In addition, the impacts of the number
of intervals from event-count data and the
number of knots in B-splines are also discussed
to explore the properties of spline-based
models.
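As a rough illustration of the spline-plus-CQP idea (a simplified sketch, not the authors' formulation), one can bin the event times and fit nonnegative B-spline coefficients by nonnegative least squares, itself a small constrained quadratic program, so the fitted intensity stays nonnegative:

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import nnls

def fit_nhpp_intensity(event_times, horizon, n_bins=50, n_knots=8, degree=3):
    """Bin event times, then solve min ||A c - counts|| subject to c >= 0
    for the B-spline coefficients of the intensity function."""
    counts, edges = np.histogram(event_times, bins=n_bins, range=(0.0, horizon))
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    # Clamped knot vector on [0, horizon]
    knots = np.r_[[0.0] * degree, np.linspace(0.0, horizon, n_knots), [horizon] * degree]
    basis = BSpline.design_matrix(centers, knots, degree).toarray()
    # Expected count in a bin is approximately lambda(center) * width
    coef, _ = nnls(basis * width, counts.astype(float))
    return BSpline(knots, coef, degree)   # callable estimated intensity
```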
Technical Session · Analysis Methodology
Advances in Rare-event Simulation
Chair: Linyun He (Georgia Institute of Technology)
Efficiency of Estimating Functions of Means in
Rare-Event Contexts
Marvin Nakayama (New Jersey Institute of Technology) and
Bruno Tuffin (INRIA, University of Rennes)
Abstract
When estimating a function of means, where some
but not necessarily all of them correspond to
rare events, we provide conditions under which
having efficient estimators of each individual
mean leads to an efficient estimator of the
function of the means. We illustrate this
setting through several examples, and numerical
results complement the theory.
Conditional Importance Sampling for Convex Rare-Event
Sets
Dohyun Ahn and Lewen Zheng (The Chinese University of
Hong Kong)
Abstract
This paper studies the efficient estimation of
expectations defined on convex rare-event sets
using importance sampling. Classical importance
sampling methods often neglect the geometry of
the target set, resulting in a significant
number of samples falling outside the target
set. This can lead to an increase in the
relative error of the estimator as the target
event becomes rarer. To address this issue, we
develop a conditional importance sampling scheme
that achieves bounded relative error by changing
the sampling distribution to ensure that a
majority of samples lie inside the target set.
The proposed method is easy to implement and
significantly outperforms the existing
approaches in various numerical experiments.
Curse of Dimensionality in Rare-Event
Simulation
Best Contributed Theoretical Paper - Finalist
Yuanlu Bai, Antonius B. Dieker, and Henry Lam (Columbia
University)
Abstract
In rare-event simulation, importance sampling
(IS) is widely used to improve the efficiency of
probability estimation. Asymptotic optimality is
a common efficiency criterion, which requires
that the relative error of the estimator only
grows subexponentially in the rarity parameter.
Most studies, however, consider low-dimensional
problems and the effect of dimensionality is
seldom analyzed. Motivated by recent AI-related
applications, we take a first step towards
high-dimensional rare-event simulation and
demonstrate that for very simple examples, IS
proposals that utilize exponential tilting,
arguably the most common IS approach, can suffer
from the "curse of dimensionality". That is,
while the growth rate of the relative error is
polynomial in the rarity parameter thus leading
to asymptotic optimality, the degree of the
polynomial depends on the problem
dimensionality. Therefore, when the dimension is
high, the relative error can be huge even in the
rarity parameter regime where IS is
conventionally believed to work well.
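For context, exponential tilting in its simplest one-dimensional form works as follows (a textbook sketch, deliberately unlike the paper's high-dimensional examples): to estimate P(Z > a) for standard normal Z, sample from N(a, 1) and reweight by the likelihood ratio:

```python
import numpy as np

def tail_prob_is(a, n=100_000, seed=0):
    """Estimate P(Z > a), Z ~ N(0,1), by exponential tilting: draw from
    N(a, 1) and weight by phi(x)/phi(x - a) = exp(-a*x + a**2 / 2)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=a, size=n)
    w = (x > a) * np.exp(-a * x + 0.5 * a**2)
    est = w.mean()
    rel_err = w.std(ddof=1) / (np.sqrt(n) * est)   # relative error of the estimate
    return est, rel_err
```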
Technical Session · Analysis Methodology
Advances in Importance Sampling
Chair: Dohyun Ahn (The Chinese University of Hong Kong)
Efficient Input Uncertainty Quantification for
Regenerative Simulation
Best Contributed Theoretical Paper - Finalist
Linyun He (Georgia Institute of Technology), Mingbin Ben
Feng (University of Waterloo), and Eunhye Song (Georgia
Institute of Technology)
Abstract
The initial bias in steady-state simulation can
be characterized as the bias of a ratio
estimator if the simulation model has a
regenerative structure. This work tackles input
uncertainty quantification for a regenerative
simulation model when its input distributions
are estimated from finite data. Our aim is to
construct a bootstrap-based confidence interval
(CI) for the true simulation output mean
performance that provides a correct coverage
with significantly less computational cost than
the traditional methods. Exploiting the
regenerative structure, we propose a k-nearest neighbor (kNN) ratio estimator for the steady-state performance measure at each set of bootstrapped input models and construct a bootstrap CI from the computed estimators. Asymptotically optimal choices for k and the bootstrap sample size are discussed. We further improve the CI by combining the kNN and likelihood ratio methods. We empirically compare
the efficiency of the proposed estimators with
the standard estimator using queueing examples.
Robust Importance Sampling for Stochastic Simulations
with Uncertain Parametric Input Model
Seung Min Baik and Young Myoung Ko (Pohang University of
Science and Technology (POSTECH)) and Eunshin Byon
(University of Michigan)
Abstract
In stochastic simulations, input model
uncertainty may significantly impact output
estimation accuracy. Although variance reduction
techniques alleviate the computational burden,
input model uncertainty remains unaddressed.
Among several variance reduction techniques, we
propose a robust version of the importance
sampling method. We formulate a min-max
optimization problem for finding a robust
sampling density for simulation inputs
considering a parametric uncertainty set that
represents candidates of the true input
distribution. We utilize the Bayesian
optimization framework for solving the outer
problem and the barrier method for tackling the
inner problem. By incorporating input model
uncertainty in the sampling stage, our method
effectively allocates simulation effort to
improve estimation robustness. Numerical
experiments demonstrate the advantages of the
proposed method over a benchmark model assuming
a precisely known input model. Our approach
produces more accurate output estimation (i.e.,
an estimator with lower variance), highlighting
its robustness and potential applicability in a
variety of situations.
Generalized Importance Sampling for Nested
Simulation
Qingyuan Chen (Cornell University) and Mingbin Ben Feng
(University of Waterloo)
Abstract
Importance sampling (IS) is a classical variance
reduction technique. Under mild conditions, an
IS estimator is unbiased, so one often seeks a variance-minimizing sampling distribution. IS has had remarkable success in many
applications such as engineering, operations
research, and finance. In some applications, such as enterprise risk management and input uncertainty quantification, complex simulation designs such as nested simulation arise naturally: the outer-level simulation generates
a set of risk factors, i.e., the scenarios,
which are used as inputs for inner-level
simulations. Nested simulation leads to wasteful
use of computations as inner simulation outputs
in each scenario are isolated from other
scenarios. In this study, we propose, analyze,
and test a generalized importance sampling
technique for nested simulation. Our generalized
IS approach reuses one set of inner simulation
outputs across different outer scenarios.
Numerical experiments show that our proposal is
orders of magnitudes more efficient than the
standard procedure.
Technical Session · Analysis Methodology
Output Analysis
Chair: Sara Shashaani (North Carolina State University)
Bootstrap Confidence Intervals for Simulation Output
Parameters
Russell R. Barton (The Pennsylvania State University)
and Luke A. Rhodes-Leader (Lancaster University)
Abstract
Bootstrapping has been used to characterize the
impact on discrete-event simulation output
arising from input model uncertainty for thirty
years. The distribution of simulation output
statistics can be very non-normal, especially in
simulation of heavily loaded queueing systems,
and systems operating at a near-optimal value of
the output measure. This paper presents issues
facing simulationists in using bootstrapping to
provide confidence intervals for parameters
related to the distribution of simulation output
statistics, and identifies appropriate
alternatives to the basic and percentile
bootstrap methods. Both input uncertainty and
ordinary output analysis settings are included.
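As a point of reference for the alternatives the paper identifies, the basic percentile bootstrap for a simulation output statistic can be sketched as follows (illustrative only):

```python
import numpy as np

def percentile_bootstrap_ci(outputs, stat=np.mean, n_boot=2000, alpha=0.05, seed=1):
    """Basic percentile-bootstrap CI: resample the replication outputs
    with replacement and take quantiles of the resampled statistic."""
    rng = np.random.default_rng(seed)
    outputs = np.asarray(outputs)
    boot = np.array([stat(rng.choice(outputs, size=outputs.size, replace=True))
                     for _ in range(n_boot)])
    return tuple(np.quantile(boot, [alpha / 2, 1 - alpha / 2]))
```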
Optimal Batching under Computation Budget
Shengyi He and Henry Lam (Columbia University)
Abstract
Batching methods operate by dividing data into
batches and conducting inference by aggregating
estimates from batched data. These methods have been used extensively in simulation output analysis; among their strengths is the light computational cost when using a small number of batches. However, under computation budget constraints, it is, to our knowledge, an open question which batching approach among the range of alternatives is statistically optimal, a question that is important in guiding procedural
configuration. We show that standard batching,
but also certain carefully designed schemes
using uneven-size batches or overlapping
batches, are large-sample optimal in the sense
of so-called uniformly most accurate
unbiasedness from a dual view of hypothesis
testing.
Confidence Intervals for Randomized Quasi-Monte Carlo
Estimators
Pierre L'Ecuyer (Université de Montréal),
Marvin K. Nakayama (New Jersey Institute of Technology),
Art B. Owen (Stanford University), and Bruno Tuffin
(Inria)
Abstract
Randomized Quasi-Monte Carlo (RQMC) methods
provide unbiased estimators whose variance often
converges at a faster rate than standard Monte
Carlo as a function of the sample size. However,
computing valid confidence intervals is
challenging because the observations from a
single randomization are dependent and the
central limit theorem does not ordinarily apply.
A natural solution is to replicate the RQMC
process independently a small number of times to
estimate the variance and use a standard
confidence interval based on a normal or Student
t distribution. We investigate the standard
Student t approach and two bootstrap methods for getting nonparametric confidence intervals for the mean using a modest number of replicates.
Our main conclusion is that intervals based on
the Student t distribution are more reliable
than even the bootstrap t method on the
integration problems arising from RQMC.
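The replicated Student t construction investigated by the authors can be sketched with SciPy's scrambled Sobol' generator (a minimal illustration; the integrand, sample size, and replicate count below are arbitrary choices):

```python
import numpy as np
from scipy import stats
from scipy.stats import qmc

def rqmc_t_ci(f, dim, n=2**12, reps=10, alpha=0.05, seed=0):
    """Student t CI from a few independent RQMC replicates, each using
    an independently scrambled Sobol' point set."""
    means = np.array([
        f(qmc.Sobol(d=dim, scramble=True, seed=seed + r).random(n)).mean()
        for r in range(reps)
    ])
    half = stats.t.ppf(1 - alpha / 2, reps - 1) * means.std(ddof=1) / np.sqrt(reps)
    return means.mean() - half, means.mean() + half

# Example: integrate f(u) = prod_j 3*u_j^2 over [0,1]^3 (true value 1)
# lo, hi = rqmc_t_ci(lambda u: np.prod(3 * u**2, axis=1), dim=3)
```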
Technical Session · Analysis Methodology
Steady-state Simulation
Chair: David Goldsman (Georgia Institute of Technology)
A Fixed-Sample-Size Method for Estimating
Steady-State Quantiles
Athanasios Lolos, Christos Alexopoulos, and David
Goldsman (Georgia Institute of Technology); Kemal
Dinçer Dingeç (Gebze Technical University);
Anup C. Mokashi (Memorial Sloan Kettering Cancer
Center); and James R. Wilson (North Carolina State
University)
Abstract
We propose FQUEST, a fully automated
fixed-sample-size procedure for computing
confidence intervals (CIs) for steady-state
quantiles. The user provides a
(simulation-generated) dataset of arbitrary size
and specifies the required quantile and nominal
coverage probability of the anticipated CI.
FQUEST incorporates the simulation analysis
methods of batching, standardized time series
(STS), and sectioning. Preliminary
experimentation with the waiting-time process in
a congested M/M/1 queueing system showed that
FQUEST performed well by delivering CIs with
estimated coverage probability close to the
nominal level, even in unfavorable circumstances
where the sample sizes were inadequate. In the
latter cases and for very small samples for
steady-state quantile estimation, the close
conformance of the CI coverage probability
typically came at the expense of loose CI
precision.
COSIMLA with General Regeneration Set to Compute
Markov Chain Stationary Expectations
Peter W. Glynn (Stanford University) and Zeyu Zheng
(University of California Berkeley)
Abstract
We extend the COSIMLA approach (short for "COmbined SIMulation and Linear Algebra") recently developed in Zheng, Infanger, and Glynn (2022) to compute stationary expectations for Markov chains with large or infinite discrete state spaces. Our work follows the idea of combining the best of linear algebra and simulation: using linear algebra to compute the "center" of the state space and using simulation to compute the contributions from outside of the "center". Unlike Zheng, Infanger, and Glynn (2022), which needed to fix a single regeneration state, our work develops a
new method that allows the use of a flexible
regeneration set with a finite number of states.
We show that this new method allows more
efficient computation for the COSIMLA approach.
Fast Approximation to Discrete-Event Simulation of
Markovian Queueing Networks
Tan Wang (Fudan University), Yingda Song (Shanghai
Jiaotong University), and Jeff Hong (Fudan University)
Abstract
Simulation of queueing networks is generally
carried out by discrete-event simulation (DES),
in which the simulation time is driven by the
occurrence of the next event. However, for
large-scale queueing networks, especially when
the network is very busy, keeping track of all
events is computationally inefficient. Moreover,
as the traditional DES is inherently sequential,
it is difficult to harness the capability of
parallel computing. In this paper, we propose a
parallel fast simulation approximation framework
for large-scale Markovian queueing networks,
where the simulation horizon is discretized into
small time intervals and the system state is
updated according to the events happening in
each time interval. The computational complexity
analysis demonstrates that our method is more
efficient for large-scale networks compared with
traditional DES. We also show that its relative error converges to zero. The experimental results show
that our framework can be much faster than the
state-of-the-art DES tools.
Technical Session · Analysis Methodology
Innovative Applications of Simulation Methodology
Chair: Hua Zheng (Northeastern University)
Structure-function Dynamics Hybrid Modeling: RNA
Degradation
Hua Zheng, Wei Xie, Paul C. Whitford, Ailun Wang,
Chunsheng Fang, and Wandi Xu (Northeastern University)
Abstract
RNA structure and functional dynamics play
fundamental roles in controlling biological
systems. Molecular dynamics simulation, which
can characterize interactions at an atomistic level, can advance the understanding of new drug discovery, manufacturing, and delivery mechanisms. However, it is computationally prohibitive to support the development of a digital twin for enzymatic reaction network mechanism learning and end-to-end bioprocess design and control. Thus, we create a hybrid
("mechanistic + machine learning") model
characterizing the interdependence of RNA
structure and functional dynamics from atomistic
to macroscopic levels. To assess the proposed
modeling strategy, we consider RNA degradation
which is a critical process in cellular biology
that affects gene expression. The empirical
study on RNA lifetime prediction demonstrates
the promising performance of the proposed
multi-scale bioprocess hybrid modeling strategy.
Tracking and Detecting Systematic Errors in Digital
Twins
Luke A. Rhodes-Leader (Lancaster University) and Barry
L. Nelson (Northwestern University)
Abstract
Digital Twins (DTs) have immense promise for
exploiting the power of computer simulation to
control large-scale real-world systems. The key
idea is to evaluate or optimize decisions using
the DT, and then implement them in the
real-world system. Even with best practices, the
DT and the real-world system may become
misaligned over time. In this paper we provide a
statistical method to detect such misalignment
even though both the simulation and the
real-world system are inherently stochastic. An
empirical evaluation and a realistic
illustration are provided.
Sensitivity Analysis for Stopping Criteria with
Application to Organ Transplantations
Xingyu Ren, Michael Fu, and Steven Marcus (University of
Maryland)
Abstract
We consider a stopping problem and its
application to the decision-making process
regarding the optimal timing of organ
transplantation for individual patients. At each
decision period, the patient state is inspected
and a decision is made whether to transplant. If
the organ is transplanted, the process
terminates; otherwise, the process continues
until a transplant happens or the patient dies.
Under suitable conditions, we show that there
exists a control limit optimal policy. We
propose a smoothed perturbation analysis (SPA)
estimator for the gradient of the total expected
discounted reward with respect to the control
limit. Moreover, we show that the SPA estimator
is asymptotically unbiased.
Technical Session · Analysis Methodology
Design of Experiments and Screening
Chair: Zeyu Zheng (University of California, Berkeley)
The Variability in Design Quality Measures for
Multiple Types of Space-filling Designs Created by
Leading Software Packages
Thomas W. Lucas (Naval Postgraduate School) and Jeffrey
D. Parker (United States Marine Corps)
Abstract
Space-filling designs (SFDs) underpin many
large-scale simulation studies. The algorithms
that construct SFDs are mostly stochastic and
cannot guarantee that optimal solutions can be
found within a practical amount of time. This
paper uses massive experimentation to find the
empirical distributions of a diverse set of design-quality measures for widely used classes of SFDs constructed by leading software packages. The objective is to provide simulation
practitioners with a better understanding of
what they can expect from different SFD choices.
The results show substantial variability in
measures of correlation and space-fillingness in
the design classes and dimensions investigated.
Therefore, computer experimenters should
generate and assess several candidate designs
using different random-number-generator seeds to
reduce the risk of using a poor design simply
due to random chance. We also find that in the
largest designs investigated, the uniform
designs generally perform best for both our
correlation and uniformity measures.
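The paper's closing advice, to generate several candidate designs under different seeds and assess them, can be followed in a few lines; the sketch below uses SciPy Latin hypercube designs and a maximum absolute pairwise correlation measure (both illustrative choices, not the paper's software or full measure set):

```python
import numpy as np
from scipy.stats import qmc

def max_abs_corr(design):
    """Largest absolute pairwise column correlation, a common
    design-quality measure (smaller is better)."""
    c = np.corrcoef(design, rowvar=False)
    return np.abs(c - np.eye(c.shape[0])).max()

# Generate the same design class under 20 seeds and compare quality
scores = [max_abs_corr(qmc.LatinHypercube(d=10, seed=s).random(100))
          for s in range(20)]
print(min(scores), max(scores))   # the spread shows seed-to-seed variability
```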
Top-m Factor Screening for Stochastic Simulation:
Multi-Armed Bandit And Sequential Bifurcation
Combined
Wen Shi (Central South University), Hong Wan (North
Carolina State University), and Xiang Xie (Central South
University)
Abstract
We propose a novel screening framework
(abbreviated to TopmSB) to identify the top m
key factors affecting the system performance.
Our framework builds on the standard sequential bifurcation (SB) screening mechanism but incorporates an adaptive multi-armed bandit (MAB) procedure in each stage to prioritize the largest group. Compared to SB, TopmSB avoids specifying hard-to-set (un)importance threshold parameters, while providing the desired computational efficiency and statistical precision guarantees. Numerical
experiments demonstrate the efficiency and
effectiveness of the proposed method.
Best Arm Identification with Fairness Constraints on
Subpopulations
Yuhang Wu, Zeyu Zheng, and Tingyu Zhu (University of
California, Berkeley)
Abstract
We formulate, analyze and solve the problem of
best arm identification with fairness
constraints on subpopulations (BAICS). Standard
best arm identification problems aim at
selecting an arm that has the largest expected
reward where the expectation is taken over the
entire population. The BAICS problem requires
that a selected arm must be fair to all
subpopulations (e.g., different ethnic groups or
different types of customers) by satisfying
constraints that the expected reward conditional
on every subpopulation needs to be larger than
some thresholds. The BAICS problem aims at correctly identifying, with high confidence, the arm with the largest expected reward from all arms that satisfy the subpopulation constraints. We
analyze the complexity of the BAICS problem by
proving a best achievable lower bound on the
sample complexity with closed-form
representation. We then design an algorithm and
prove the sample complexity to match with the
lower bound in terms of order.
Technical Session · Analysis Methodology
Analysis Uses in Optimization
Chair: Ilya Ryzhov (University of Maryland)
Efficient Bandwidth Selection for Kernel Density
Estimation
Haidong Li (University of Chinese Academy of Sciences),
Long Wang and Yijie Peng (Peking University), and Di
Wang (Shanghai Jiao Tong University)
Abstract
We consider bandwidth selection for kernel
density estimation. The performance of a kernel density estimator relies heavily on the quality of the bandwidth. In this paper, we propose an
efficient plug-in kernel density estimator which
first perturbs the bandwidth to estimate the
optimal bandwidth, followed by applying a kernel
density estimator with the estimated optimal
bandwidth. The proposed method utilizes the
zeroth-order information of kernel function and
has a faster convergence rate than other plug-in
methods in existing literature. Simulation
results demonstrate superior finite sample
performance and robustness of the proposed
method.
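The paper's perturbation-based plug-in estimator is not reproduced here; as a point of reference, the sketch below applies a standard cross-validated bandwidth search (a different, well-known selection technique) using scikit-learn, on hypothetical data.

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)
x = rng.standard_normal(500).reshape(-1, 1)   # hypothetical sample

# Cross-validation scores each bandwidth by held-out log-likelihood.
grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                    {"bandwidth": np.logspace(-1.5, 0.5, 25)},
                    cv=5)
grid.fit(x)
print("selected bandwidth:", grid.best_params_["bandwidth"])
```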
pdf
CGPT: A Conditional Gaussian Process Tree for
Grey-Box Bayesian Optimization
Mengrui (Mina) Jiang, Tanmay Khandait, and Giulia
Pedrielli (Arizona State University)
Abstract
In black-box optimization problems, Bayesian
optimization algorithms are often applied by
generating inputs and measure values to discover
hidden structure and determine where to sample
sequentially. However, information about system
properties can be available. In different
learning tasks, we may know that the objective
is the minimum of several functions, or has a network structure. In
this paper we consider the case where the
structure of the objective function can be
encoded as a tree. We propose the new
Conditional Gaussian Process tree (CGPT) model
for "tree functions'' to embed the function
structure and improving the prediction power of
the Gaussian process. We utilize the
intermediate information at the tree nodes, to
formulate a novel likelihood for the estimation
of the CGPT parameters. We formulate the
learning problem and investigate the performance of the
proposed approach. Our study shows that CGPT
always outperforms a single Gaussian process
model.
pdf
Mean-Variance Portfolio Optimization with Nonlinear
Derivative Securities
Shiyu Wang and Guowei Cai (Lingnan College, Sun Yat-sen
University); Peiwen Yu (Soochow University); Guangwu Liu
(City University of Hong Kong); and Jun Luo (Shanghai
Jiao Tong University)
Abstract
In this paper, we propose a simulation approach
to mean-variance optimization for portfolios
comprised of derivative securities. The key to the proposed method is the development of an unbiased and consistent estimator of the covariance matrix of asset returns that do not
admit closed-form formulas but require Monte
Carlo estimation, leading to a sample-based
optimization problem that is easy to solve. We
characterize the asymptotic properties of the
proposed covariance estimator, and the solution
to and the objective value of the sample-based
optimization problem. Performance of the
proposed approach is demonstrated via numerical
experiments.
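A rough sketch of the overall pipeline under toy assumptions: Monte Carlo estimates of nonlinear derivative returns feed a sample covariance matrix (the paper constructs an unbiased, consistent estimator; a naive one stands in here), which then yields closed-form minimum-variance weights. All distributions and payoffs are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n_outer, n_assets = 5000, 3

# Toy stand-in: derivative returns are nonlinear in the underlying returns,
# and each would in practice be priced by an inner Monte Carlo run.
underlying = rng.multivariate_normal([0.02, 0.03, 0.01],
                                     0.04 * np.eye(n_assets), n_outer)
returns = np.maximum(underlying, 0.0) + rng.normal(0, 0.01, (n_outer, n_assets))

mu = returns.mean(axis=0)
Sigma = np.cov(returns, rowvar=False)   # naive sample covariance baseline

# Closed-form minimum-variance weights under a sum-to-one constraint.
ones = np.ones(n_assets)
w = np.linalg.solve(Sigma, ones)
w /= ones @ w
print("weights:", w.round(3), " portfolio mean:", (mu @ w).round(4))
```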
pdf
Aviation Modeling and Analysis
Track Coordinator - Aviation Modeling and Analysis: Sameer Alam (Nanyang Technological University), Miguel
Mujica Mota (Amsterdam University of Applied Sciences),
Michael Schultz (Bundeswehr University Munich)
Technical Session · Aviation Modeling and Analysis
Airport and Airspace Operations
Tactical Minimization of the Environmental Impact of
Holding in the Terminal Airspace and an Associated
Economic Model
Aditya Paranjape and Anwesha Basu (Tata Consultancy
Services Ltd)
Abstract
Minimization of the carbon footprint of aviation
is an active area of interest to the industry
and policy makers alike. Optimization of the
individual flight phases is an important step in
that direction. This paper considers the holding
phase, wherein aircraft hold in the terminal
airspace of airports prior to approach and
landing during times of busy operation or when
the arrival capacity is reduced due to factors
such as bad weather. We propose a tactical
method to allocate landing slots while
minimizing the environmental impact of holds. An
environmentally-driven policy can be perceived
as unfair, particularly by airlines whose
environmentally friendly aircraft might need to hold longer than they would under a fair first-come-first-served policy. To alleviate
this challenge, we propose a number of economic
reward schemes, including one based on a linear
programming problem obtained by applying
complementary slackness to the dual of the
assignment problem.
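For the slot-allocation step, a minimal sketch: if the environmental cost of assigning aircraft i to slot j is taken to be its holding fuel burn, the allocation is a linear assignment problem. The cost model and numbers below are assumptions for illustration, not the paper's.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)
n = 6                                   # aircraft and landing slots
slot_times = np.arange(n) * 2.0         # slot times in minutes (assumed)
eta = np.sort(rng.uniform(0, 8, n))     # estimated arrival times
burn_rate = rng.uniform(0.8, 1.6, n)    # holding fuel burn per minute (assumed)

# Cost of giving aircraft i slot j: fuel burned while holding from its ETA
# until the slot time; slots before the ETA get a prohibitively large cost.
hold = slot_times[None, :] - eta[:, None]
cost = np.where(hold >= 0, burn_rate[:, None] * hold, 1e9)

rows, cols = linear_sum_assignment(cost)
print("slot for each aircraft:", cols, " total burn:", cost[rows, cols].sum())
```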
pdf
Use of Variable Sized Entities to Model Airport
Passenger Flow with Pedestrian Dynamics
Erich Deines and Tanuj Babele (TransSolutions LLC) and
Gary Gardner (InControl)
Abstract
This paper describes the use of variable-sized
entities within the framework of the InControl
simulation software product Pedestrian Dynamics
to rapidly model passenger flow and congestion
for a series of check-in hall lobby designs for
a US domestic airline terminal. Note that the
airline and airport will remain anonymous for
this presentation due to confidentiality.
pdf
Technical Session · Aviation Modeling and Analysis
Machine Learning Applications in Aviation
Chair: John Shortle (George Mason University)
Aircraft Line Maintenance Scheduling using Simulation
and Reinforcement Learning
Simon Widmer, Syed Shaukat, and Cheng-Lung Wu (UNSW)
Abstract
This paper presents a reinforcement learning
(RL) algorithm prototype to solve the aircraft
line maintenance scheduling problem. The Line
Maintenance Scheduling Problem (LMSP) is
concerned with scheduling a set of maintenance
tasks during an aircraft's ground time. To
address this problem, we introduce a novel LMSP
method combining a hybrid simulation model and
reinforcement learning to schedule maintenance
tasks at multiple airports. First, this
paper briefly reviews the existing literature on
optimization-based and AI-enhanced aircraft
maintenance scheduling. Second, the novel
reinforcement learning LMSP method is
introduced, evaluated using industry data, and
compared with optimization-based LMSP solutions.
Our experiments demonstrate that the LMSP method
using reinforcement learning is capable of
identifying near-optimal policies for scheduling
line maintenance jobs when compared to the exact
and heuristics-based methods. The proposed model
provides an excellent foundation for future
studies on AI-enhanced scheduling problems.
pdf
Neural Networks for GNSS Matrix Attitude
Determination in Aerospace Transportation
Raul de Celis, Jose Gonzalez-Barroso, Pablo
Solano-Lopez, and Luis Cadarso (Rey Juan Carlos
University)
Abstract
Accurate navigation and control of Aerial
Vehicles requires precise estimations of their
position and attitude. Measuring an aircraft's
rotation involves comparing two vectors in
different reference frames, such as inertial and
body axes. Typically, a GNSS sensor-based matrix
with at least three sensors is utilized for this
purpose, taking advantage of the carrier phase
measurements. However, factors such as
multipath, frequency lock loss, cycle slips, and
severe clock drifts can impede accurate integer
ambiguity resolution. To address these
challenges, a new neural network-based technique
has been developed to optimize the management of
large amounts of data and increase carrier phase
ambiguity resolution reliability. By using
carrier phase difference and pseudorange
information, various neural network
configurations can be trained to solve the
ambiguity and estimate the precise attitude of
the GNSS sensor matrix. The provided solution
can be used alone or hybridized with other
attitude sensor information, such as gyroscope measurements.
pdf
Complex and Resilient Systems
Track Coordinator - Complex and Resilient Systems: Saurabh Mittal (MITRE Corporation), Claudia Szabo (The University of Adelaide)
Technical Session · Complex and Resilient Systems
Cyber Resilience in Complex Systems
Chair: Claudia Szabo (The University of Adelaide)
A Mathematical Theory to Quantify Cyber-Resilience in
IT/OT Networks
Ranjan Pal (Massachusetts Institute of Technology),
Rohan Sequeira (University of Southern California), and
Michael Siegel (Massachusetts Institute of Technology)
Abstract
Modern enterprise infrastructures (EIs)
including those of industrial control systems
(ICSs) are becoming increasingly crucial to
businesses in a wide range of sectors spanning
multiple end-user verticals (e.g., energy,
chemical, manufacturing, biotechnology). These
EIs improve the (real-time) decision support,
productivity, and efficiency of business
processes, but are necessarily reliant upon the
cyber-resilience of complex infrastructures for
sustainable business continuity. We are
interested in the long-standing open question in
the cyber-resilience domain: how can managers
formally quantify cyber-resilience for any
complex networked EI (sub-)system in the event
of a cyber-attack affecting its multiple
(inter-dependent) components? We propose a
simulation-backed framework derived from
probabilistic graph theory to answer this
question. We pioneer the derivation and analysis
of a quantifiable, closed-form, manager-friendly
expression exhibiting the degree of
cyber-resilience (dependent upon individual EI
component functionality quality and the varying
extents of functional dependencies across
networked components) within the (sub-)system
post cyber-attack(s) affecting an EI.
pdf
Trustworthy Artificial Intelligence Framework for
Proactive Detection and Risk Explanation of Cyber
Attacks in Smart Grid
Shirajum Munir and Sachin Shetty (Old Dominion
University)
Abstract
The rapid growth of distributed energy resources
(DERs), such as renewable energy sources,
generators, consumers, and prosumers in the
smart grid infrastructure, poses significant
cybersecurity and trust challenges to the grid
controller. Consequently, it is crucial to
identify adversarial tactics and measure the
strength of the attacker’s DER. To enable
a trustworthy smart grid controller, this work
investigates a trustworthy artificial
intelligence (AI) mechanism for proactive
identification and explanation of the cyber risk
caused by the control/status message of DERs.
We thus propose and develop a trustworthy AI framework to facilitate the deployment of AI
algorithms for detecting potential cyber threats
and analyzing root causes based on Shapley value
interpretation while dynamically quantifying the
risk of an attack based on Ward’s minimum
variance formula. The experiment with a
state-of-the-art dataset establishes the
proposed framework as a trustworthy AI by
fulfilling the capabilities of reliability,
fairness, explainability, transparency,
reproducibility, and accountability.
pdf
A Mathematical Theory to Price Cyber-Cat Bonds
Boosting IT/OT Security
Ranjan Pal (MIT Sloan School of Management) and
Bodhibrata Nag (Indian Institute of Management Calcutta)
Abstract
The density of enterprise cyber (re-)insurance
markets to manage (aggregate) enterprise
cyber-risk has been too low to realize their
potential to significantly improve
cyber-security and consequently the
cyber-reliability of (ICS) enterprise
ecosystems. In this paper, we propose the use of
catastrophic (CAT) bonds as a radical and
alternative residual cyber-risk management
methodology to alleviate the large supply-demand
gap in the current cyber (re-)insurance
industry, by boosting capital injection in the
latter industry. Two important follow-up
questions arise: (i) when is it feasible for
cyber (re-)insurers to invest in CAT bonds? and
(ii) how can we price cyber-CAT bonds
conditioned on the feasibility condition(s)? We
focus on answering the second question pivoted
upon an existential answer to the first. We
propose a novel practically motivated
information asymmetry (IA) driven cyber-CAT bond
pricing model, built upon theories of financial
stochastic processes and Monte Carlo
simulations, in realistic arbitraged incomplete
markets.
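For intuition only, a toy Monte Carlo pricing of a CAT-style bond under a compound Poisson loss process; the contract terms and loss distributions are assumed, and the paper's information-asymmetry and incomplete-market machinery is not modeled.

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, T = 10_000, 3                  # 3-year bond horizon (assumed)
r, face, coupon, trigger = 0.03, 100.0, 6.0, 50.0   # toy contract terms (assumed)

# Annual aggregate cyber losses: Poisson event counts, lognormal severities.
counts = rng.poisson(2.0, size=(n_paths, T))
losses = np.array([[rng.lognormal(2.0, 1.0, k).sum() for k in row]
                   for row in counts])

cum = losses.cumsum(axis=1)
alive = cum < trigger                   # bond not yet triggered at each year-end
disc = np.exp(-r * np.arange(1, T + 1))
coupons = (alive * coupon * disc).sum(axis=1)   # coupons paid while alive
principal = alive[:, -1] * face * disc[-1]      # principal repaid if never triggered
print("Monte Carlo price:", (coupons + principal).mean().round(2))
```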
pdf
Technical Session · Complex and Resilient Systems
Panel: Resilience and Complexity in Socio-cyber-physical
Systems
Chair: Claudia Szabo (The University of Adelaide)
Resilience and Complexity in Socio-Cyber-Physical
Systems
Claudia Szabo (University of Adelaide), Rodrigo Castro
(CIFASIS-CONICET), Joachim Denil (University of
Antwerp), and Susan M. Sanchez (Naval Postgraduate
School)
Abstract
Socio-Cyber-Physical Systems are ubiquitous in
today’s world. They are inherently complex
systems built out of many large-scale systems
that encompass different perspectives and
numerous stakeholders. This leads to several
challenges in managing their complexity and
emergent behavior. In addition, these systems
tend to include many adaptive and autonomous
systems with different goals and different
adaptations to environment changes or failures.
The design, analysis, and testing of such
systems is inherently challenging but is
becoming critical due to their wide adoption. In
this panel, we aim to discuss some of these
challenges and potential solutions.
pdf
Technical Session · Complex and Resilient Systems
Panel: Using Simulation to Improve Trust and Autonomy
Adoption
Chair: Kelly Neville (MITRE Corporation)
The Use of Simulation to Improve Trust and Adoption
of Autonomy and AI in High-Consequence Work
Systems
Emily Barrett, Lisa Billman, Theresa Fersch, Valerie
Gawron, and Kelly Neville (MITRE Corporation); Emily
Patterson (The Ohio State University); and Eric Vorm
(Naval Air Warfare Center)
Abstract
We assert that simulation should be an integral
part of technology development and acquisition.
Its use to iteratively evaluate new technology
across the development timeline can help ensure
technologies contribute to resilience in work
operations. This, in turn, benefits trust and
likelihood of adoption. Potential hindrances to
simulation in technology development are the
time and complexity simulation can introduce.
Time may be needed to model entities and
dynamics to be simulated, plan and conduct
simulation-based tests and experiments, and
translate the results into requirements, user
stories, or other inputs to the
technology’s design and implementation
plan. Complexity is increased when simulation
results suggest new or changed requirements,
identify technology design and implementation
improvements, or produce conflicting feedback
from potential users. We will discuss these
challenges, methods and tools that minimize
their disruptive effects, varieties of
simulation we have used to support technology
development, and benefits of using simulation in
development.
pdf
Technical Session · Complex and Resilient Systems
Resilient Enterprise and Services
Chair: Claudia Szabo (The University of Adelaide)
Symbiotic Use of Digital Twin, Simulation and Design
Thinking Approach for Resilient Enterprise
Souvik Barat, Sylvan Lobo, Reshma Korabu, Himabindu
Thogaru, and Ravi Mahamuni (Tata Consultancy Services
Research)
Abstract
Enterprises are increasingly facing the need to
be resilient in the face of uncertainty and
dynamism. Simulatable digital twins have become
critical aids for analyzing and adapting complex
systems. Design thinking and service design
methodologies, in contrast, are gaining momentum
for ideation, subjective evaluation, and
innovation. A systematic application of these
methodologies to explore innovative ideas and a
faithful virtual environment to test and
fine-tune those ideas without impacting real
systems could be transformational. This paper
presents an approach that establishes a
symbiotic relationship between these two
approaches to introduce precision and
innovativeness to make enterprises resilient. We
describe the key characteristics of resilient
enterprises, present our approach, and
illustrate its effectiveness with a case study
focusing on a transformation toward a new normal
to address the Covid-19 pandemic induced
disruptions in the IT industry.
pdf
Markov Process Simulations of Service Systems with
Concurrent Hawkes Service Interactions
Andrew Daw (University of Southern California) and Galit
B. Yom-Tov (Technion - Israel Institute of Technology)
Abstract
In multi-tasked services such as
messaging-based contact centers, parallel
service interactions share a mutual dependence
through the agent's concurrency. Here, we
introduce Markov process simulation methods for
bivariate Hawkes cluster service models that are
not Markovian by default due to their
concurrency dependence. To do so, we propose an
alternate construction that maintains extra
"shadow" variables for how the process would be
under other concurrency levels. We prove that
this construction yields an equivalent Markov
process, and we show through numerical
experiments that its corresponding simulation
algorithm is significantly more efficient than
the non-Markovian alternatives.
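For background, a minimal sketch of simulating a univariate Hawkes process by Ogata's thinning, the standard non-Markovian baseline; the paper's bivariate, concurrency-dependent construction and its Markov "shadow"-variable equivalent are not reproduced. All parameter values are illustrative.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, horizon, rng):
    """Ogata's thinning for a univariate Hawkes process with
    exponentially decaying excitation kernel alpha*exp(-beta*t)."""
    t, events = 0.0, []
    while True:
        # With an exponential kernel the intensity decays between events,
        # so the current intensity is a valid upper bound going forward.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
        t += rng.exponential(1.0 / lam_bar)
        if t > horizon:
            return np.array(events)
        lam_t = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
        if rng.uniform() <= lam_t / lam_bar:   # accept with prob lam_t/lam_bar
            events.append(t)

rng = np.random.default_rng(5)
times = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=50.0, rng=rng)
print(len(times), "service interactions on [0, 50]")
```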
pdf
Stochastic Climate Simulation for Power Grid Net
Demand Risk Assessment
Rob Cirincione (Sunairio)
Abstract
Power grid planners and power portfolio managers
are increasingly concerned with anticipating
“net demand” risk, defined as customer demand minus renewable generation for a
particular time period. Net demand is a better
predictor of grid stress than peak demand in a
grid with significant renewables penetration.
For Holy Cross Energy, Sunairio simulated 1,000
probabilistic outcomes of hourly weather across
a geographic region that encompassed the
locations of customers and renewable energy
resources (wind, solar), for 15 years. The
hourly weather simulations were transformed to
hourly energy simulations of customer demand,
wind generation, and solar generation via
machine learning models, creating a broad,
climate-change-aware, coincident data set from
which to quantify concurrent risks to net
demand. Net demand paths of particular interest
for grid planning were curated via statistical
processing.
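A hedged sketch of the net-demand calculation on synthetic traces: in the study, demand, wind, and solar come from weather simulations transformed by machine learning models, whereas the arrays below are simple stand-ins used only to show how peak net demand statistics fall out of coincident simulations.

```python
import numpy as np

rng = np.random.default_rng(6)
n_sims, hours = 1000, 8760              # one simulated year at hourly resolution

# Hypothetical hourly traces (MW) standing in for ML-transformed weather sims.
t = np.arange(hours)
demand = 50 + 10 * np.sin(2 * np.pi * t / 24) + 4 * rng.standard_normal((n_sims, hours))
wind = np.clip(15 + 8 * rng.standard_normal((n_sims, hours)), 0, None)
solar = np.clip(20 * np.sin(2 * np.pi * (t % 24) / 24 - np.pi / 2), 0, None)

net = demand - wind - solar             # net demand = demand minus renewables
peak = net.max(axis=1)                  # annual peak net demand per simulation
print("P50 / P99 annual peak net demand (MW):",
      np.percentile(peak, [50, 99]).round(1))
```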
pdf
Technical Session · Complex and Resilient Systems
Handling Uncertainty in Complex and Resilient Systems
Chair: Souvik Barat (TCS)
Effects of Timing of Agents' Reactions in
Pharmaceutical Supply Chains under Disruption
Rozhin Doroudi, Ozlem Ergun, Jacqueline Griffin, and
Stacy Marsella (Northeastern University)
Abstract
Disruptions in the supply chain network can have
significant and far-reaching consequences,
especially in pharmaceutical supply chains that
affect health and financial outcomes and raise
equity concerns. To inform strategies that can
address this critical global problem, we study
disruptions in pharmaceutical supply chains
using multiagent simulations. These simulations
include decision-theoretic agents with a theory
of mind reasoning that allows them to reason
about the other agents in the supply chain,
including their trustworthiness. The simulations
reveal how supplier-buyer interactions have
non-local effects which can exacerbate and
extend disruption impacts. In addition, a
distributor’s focus on its own short-term
profit can lower its long-term profit and damage
equity in health centers. We also demonstrate how
agents adapt to changes in the environment and
changes in other agents’ behavior, and how, in the absence of explicit communication and coordination, the timing of these adaptations can prevent disruption mitigation efforts from succeeding.
pdf
Model Predictive Control in Optimal Intervention of
COVID-19 with Mixed Epistemic-Aleatoric
Uncertainty
Jinming Wan, Saeideh Mirghorbani, N. Eva Wu, and
Changqing Cheng (Binghamton University)
Abstract
Non-pharmaceutical interventions (NPI) have been
proven vital in the fight against the COVID-19
pandemic before the massive rollout of
vaccinations. Considering the inherent
epistemic-aleatoric uncertainty of parameters,
accurate simulation and modeling of the
interplay between the NPI and contagion dynamics
are critical to the optimal design of
intervention policies. We propose a modified
SIRD-MPC model that combines a modified
stochastic
Susceptible-Infected-Recovered-Deceased (SIRD)
compartment model with mixed epistemic-aleatoric
parameters and Model Predictive Control (MPC),
to develop robust NPI control policies to
contain the infection of the COVID-19 pandemic
with minimum economic impact. The simulation
results indicate that our proposed model can
significantly decrease the infection rate
compared to the practical results under the same
initial conditions.
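For orientation, a minimal deterministic SIRD sketch with an NPI control knob u (fractional contact reduction); the paper's MPC controller and its mixed epistemic-aleatoric parameter uncertainty are not modeled here, and all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sird(t, y, beta, gamma, mu, u):
    """SIRD dynamics; u in [0, 1] is the NPI-induced contact reduction."""
    S, I, R, D = y
    new_inf = (1 - u) * beta * S * I
    return [-new_inf, new_inf - (gamma + mu) * I, gamma * I, mu * I]

y0 = [0.99, 0.01, 0.0, 0.0]
for u in (0.0, 0.3, 0.6):               # compare NPI intensities
    sol = solve_ivp(sird, (0, 180), y0, args=(0.3, 0.08, 0.005, u))
    S, I, R, D = sol.y
    print(f"u={u}: peak infected={I.max():.3f}, deaths={D[-1]:.4f}")
```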
pdf
Technical Session · Complex and Resilient Systems
Reliability in Power Systems
Chair: Jinming Wan (Binghamton University)
Cascading Transformer Failure Probability Model Under
Geomagnetic Disturbances
Pratishtha Shukla, James Nutaro, and Srikanth Yoginath
(Oak Ridge National Laboratory)
Abstract
This paper develops a probabilistic model to
assess the cascading failure of transformers in
an electric power grid experiencing geomagnetic
disturbances caused by a solar storm. We propose
a model in which the probability of failure is a
function of the intensity of the solar storm,
the physical properties of the transformer, the
geographical location of the transformer, and
the flow of electrical power. We demonstrate the
proposed model using the IEEE 14-bus system and
several notional solar storms. The model quickly
computes the initial and cascading failure
probabilities of the transformers in the system
as a first step towards quantifying the risks
posed by future solar storms.
pdf
Impact of Salt-To-Steam Heat Exchanger Failure Rates
on Lifetime Production of Concentrating Solar Power
Tower Plants
Karoline Hood (US Army, Colorado School of Mines) and
Alex Zolan (National Renewable Energy Laboratory)
Abstract
Heat exchangers in the steam generation system
(SGS) of concentrated solar power (CSP) plants
are unique in their functionality. Consequently,
equipment replacements have long lead times. A
typical CSP plant using an organic Rankine cycle
has one or two salt-to-steam trains (SSTs)
within the SGS. When one heat exchanger in the
SGS fails, the individual SGS fails. We use an
existing framework that combines simulation and
optimization models to assess the impacts of
irrecoverable failures on long-term production.
The methodology provides an optimized dispatch
with the integration of unplanned simulated
failures over a thirty-year period. Our work
shows a system of two trains provides resiliency
and reduces downtime of a plant by six to eight
times compared to a single train. The gross
revenue increases by 31% and 11% for one- and two-train systems, respectively, when the expected
lifetime increases from five to 10 years.
pdf
Data Science for Simulation
Track Coordinator - Data Science for Simulation: Abdolreza Abhari (Ryerson University), Hamdi Kavak (George
Mason University)
Technical Session · Data Science for Simulation
Machine Learning for Simulation
Chair: Hamdi Kavak (George Mason University)
Causal Dynamic Bayesian Networks for Simulation
Metamodeling
Best Contributed Theoretical Paper - Finalist
Pracheta Boddavaram Amaranath (University of
Massachusetts Amherst), Sam Witty (Basis Research
Institute), and Peter J. Haas and David Jensen
(University of Massachusetts Amherst)
Abstract
A traditional metamodel for a discrete-event
simulation approximates a real-valued
performance measure as a function of the
input-parameter values. We introduce a novel
class of metamodels based on modular dynamic
Bayesian networks (MDBNs), a subclass of
probabilistic graphical models which can be used
to efficiently answer a rich class of
probabilistic and causal queries (PCQs). Such
queries represent the joint probability
distribution of the system state at multiple
time points, given observations of, and
interventions on, other state variables and
input parameters. This paper is a first
demonstration of how the extensive theory and
technology of causal graphical models can be
used to enhance simulation metamodeling. We
demonstrate this potential by showing how a
single MDBN for an M/M/1 queue can be learned
from simulation data and then be used to quickly
and accurately answer a variety of PCQs, most of
which are out-of-scope for existing metamodels.
pdf
Deep-learning-assisted Cardiac Electrophysiology
Simulation
Weixuan Dong, Yifu Li, and Rui Zhu (The University of
Oklahoma)
Abstract
Simulation built upon partial and ordinary
differential equations has been a classic
approach to modeling cardiac
electrophysiological dynamics. However,
mitigating the computational burden of
differential equations is still a challenging
problem. This paper provides a novel alternative
utilizing data-driven recurrent neural networks
for cardiac electrophysiological dynamic
simulation. Specifically, we develop a long
short-term memory (LSTM)-assisted simulation to
capture the underlying dynamics of cardiac
electrophysiology while preserving computational
efficiency. Experimental results demonstrate the
efficiency and effectiveness of the proposed
method, which outperforms the differential
equation-based simulation approach while
significantly reducing the computational cost.
The proposed method offers a promising
alternative to traditional simulation and may
contribute to the development of more efficient
and accurate approaches for simulating cardiac
electrophysiology.
pdf
Inferring Epidemic Dynamics Using Gaussian Process
Emulation of Agent-Based Simulations
Abdulrahman Ahmed, M. Amin Rahimian, and Mark Roberts
(University of Pittsburgh)
Abstract
Computational models help decision makers
understand epidemic dynamics to optimize public
health interventions. Agent-based simulation of
disease spread in synthetic populations allows
us to compare and contrast different effects
across identical populations or to investigate
the effect of interventions keeping every other
factor constant between "digital twins." FRED (A
Framework for Reconstructing Epidemiological
Dynamics) is an agent-based modeling system with
a geo-spatial perspective using a synthetic
population that is constructed based on the U.S.
Census data. In this paper, we show how Gaussian
process regression can be used on
FRED-synthesized data to infer the differing
spatial dispersion of the epidemic dynamics for
two disease conditions that start from the same
initial conditions and spread among identical
populations. Our results showcase the utility of
agent-based simulation frameworks such as FRED
for inferring differences between conditions
where controlling for all confounding factors
for such comparisons is next to impossible
without synthetic data.
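A minimal sketch of the emulation idea: fit a Gaussian process to (parameter, output) pairs from simulation runs and read off predictions with uncertainty. The input-output pairs below are synthetic stand-ins, not actual FRED runs.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)
# Hypothetical stand-in for agent-based runs: transmissibility -> attack
# rate, with stochastic replication noise.
theta = rng.uniform(1.2, 3.0, 40).reshape(-1, 1)
attack_rate = 1 - np.exp(-0.8 * (theta.ravel() - 1.1)) + rng.normal(0, 0.02, 40)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(theta, attack_rate)

grid = np.linspace(1.2, 3.0, 5).reshape(-1, 1)
mean, sd = gp.predict(grid, return_std=True)
for t, m, s in zip(grid.ravel(), mean, sd):
    print(f"theta={t:.2f}: attack rate ~ {m:.3f} +/- {2 * s:.3f}")
```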
pdf
Technical Session · Data Science for Simulation
Data Analytics for Simulation
Chair: Abdolreza Abhari (Toronto Metropolitan
University)
Autonomic Orchestration of In-Situ and In-Transit
Data Analytics for Simulation Studies
Xiaorui Du (Technical University of Munich); Adriano
Pimpini (Sapienza, University of Rome); Andrea Piccione
(Huawei Munich Research Center); Zhuoxiao Meng and
Anibal Siguenza-Torres (Technical University of Munich);
Stefano Bortoli (Huawei Munich Research Center); Alois
Knoll (Technical University of Munich); and Alessandro
Pellegrini (University of Rome Tor Vergata)
Abstract
Modern parallel/distributed simulations can
produce large amounts of data. The historical
approach of performing analyses at the end of
the simulation is unlikely to cope with modern,
extremely large-scale analytics jobs. Indeed,
the I/O subsystem can quickly become the global
bottleneck. Similarly, processing on-the-fly the
data produced by simulations can significantly
impair the performance in terms of computational
capacity and network load. We present a
methodology and reference architecture for
constructing an autonomic control system to
determine at runtime the best placement for data
processing (on simulation nodes or a set of
external nodes). This allows for a good tradeoff
between the load on the simulation's critical
path and the data communication system. Our
preliminary experimentation shows that autonomic
orchestration is crucial to improve the global
performance of a data analysis system,
especially when the simulation node's rate of
data production varies during simulation.
pdf
Scaling Cross-Relations with Larger Dataset
Victor Diakov (Simfoni Ltd.)
Abstract
Simulation and optimization of procurements
might employ clustering dataset elements to
exclude possible duplicates and improve
processing resiliency. This study presents a
case of applying scaling methods to reduce
computation time of clustering between a smaller
and a larger dataset. In this example (of
selecting close supplier names), computation
time scales as square of N (the number of
elements), and the presented approach in effect
makes computing time linear in N. As a
result, computation time in our case is reduced
by over an order of magnitude.
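One standard way to obtain such near-linear scaling is blocking: compare names only within small candidate blocks instead of across all pairs. The sketch below is a generic illustration of that idea with hypothetical supplier names, not the paper's specific scaling method.

```python
from collections import defaultdict
from difflib import SequenceMatcher

suppliers = ["Acme Industrial", "ACME Industrials", "Beta Logistics",
             "Beta Logistic GmbH", "Gamma Foods"]   # hypothetical names

def key(name):
    return name.lower().replace(" ", "")[:4]        # simple blocking key

# Group names by key so near-duplicate checks stay within small blocks;
# total cost is then roughly linear in N when blocks stay small.
blocks = defaultdict(list)
for s in suppliers:
    blocks[key(s)].append(s)

for names in blocks.values():
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            sim = SequenceMatcher(None, names[i].lower(), names[j].lower()).ratio()
            if sim > 0.8:
                print("possible duplicate:", names[i], "<->", names[j], round(sim, 2))
```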
pdf
Uncovering Competitor Pricing Patterns in the Danish
Pharmaceutical Market via Subsequence Time Series
Clustering: A Case Study
Ruhollah Jamali (University of Southern Denmark) and
Sanja Lazarova-Molnar (Karlsruhe Institute of
Technology)
Abstract
Adopting data-driven decision-making approaches
can significantly enhance profitability and
foster growth in economic situations through
quantitative analysis of market dynamics. One
intriguing market that warrants examination is
the price competition observed within the Danish
pharmaceutical sector, where numerous companies
are vying for a larger market share through the
offering of diverse pharmaceutical products.
This paper aims to shed light on this market by
employing subsequence time series clustering
techniques to identify pricing patterns among
the players involved in the Danish
pharmaceutical industry. The data analysis
pipeline performed in this study allows for the identification of price patterns for clustering and the discovery of different agent groups. It also provides a foundation for expanding the
current agent-based model of the European
pharmaceutical parallel trade market by
analyzing the pricing behavior and patterns of
players, facilitating the utilization of
historical data to model agent behavior and
advancing research in this area.
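A minimal sketch of subsequence time series clustering: slide a window over a price series, z-normalize each subsequence so price patterns rather than price levels are compared, and cluster with k-means. The series, window length, and cluster count are hypothetical choices, not the study's.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
prices = 100 + rng.standard_normal(500).cumsum()   # hypothetical price series

# Extract sliding-window subsequences and z-normalize each one.
w = 14
subs = np.array([prices[i:i + w] for i in range(len(prices) - w + 1)])
subs = (subs - subs.mean(axis=1, keepdims=True)) / subs.std(axis=1, keepdims=True)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(subs)
print("cluster sizes:", np.bincount(labels))
```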
pdf
Technical Session · Data Science for Simulation
Simulation in Action
Chair: Hamdi Kavak (George Mason University)
A Preliminary Study of Regularization Framework for
Constructing Task-Specific Simulators
Dilara Aykanat (University of California, Berkeley);
Nian Si (The University of Chicago); and Zeyu Zheng
(University of California, Berkeley)
Abstract
One approach to construct or calibrate
simulators, when representative real data exist,
is to ensure that the synthetic data generated
by the simulator match the empirical distribution of the real data. However, such an
approach to construct simulators does not take
into consideration where the constructed
simulators will be used. For some applications,
there are clear tasks (such as performance
evaluation of different decisions) in
users’ minds where the simulated data will
serve as input to the tasks. In this work, we
propose an approach to use the knowledge of
these tasks to guide the construction of
simulators, in addition to the distribution
match of simulated data and real data by
regularizing the objective function with a task
related penalty. We conduct a preliminary
numerical study of this approach to illustrate
its effectiveness compared to ignoring the specific tasks of the simulators.
pdf
Using Simulation to Assess the Reliability of
Forecasts in High-tech Industry
Bhoomica Mysore Nataraja (Eindhoven University of
Technology); Tanmay Aggarwal (Lambda Function Inc); and
Nitish Singh, Koen Herps, and Ivo Adan (Eindhoven
University of Technology)
Abstract
In a high-tech production environment, capacity
investment and production planning are often
based on the demand information from
manufacturers within a supply chain. A supplier
solicits forecast information from a
manufacturer, and the manufacturer provides
demand forecasts that are updated on a rolling
horizon basis. Problems arise with this setup if
the manufacturer provides volatile forecast
quantities due to the market's fluctuating
demand or internal bias. As a result, suppliers'
mistrust regarding forecast quantities grows,
leading to adjusted production plans based on
planners' anecdotal experience. The paper
presents a decision model to determine the
reliability of forecasts provided by
manufacturers to facilitate better production
planning. The study also suggests alternate
forecasting techniques in case of low
reliability. To evaluate the effectiveness of
the proposed approach, a simulation study is
conducted for different manufacturers and
scenarios. Our experiments showed an average
cost reduction of 14% across all instances.
pdf
Digital Twin Based Learning Framework for Adaptive
Fault Diagnosis in Microgrids with Autonomous
Reconfiguration Capabilities
Temitope Runsewe, Abdurrahman Yavuz, and Nurcin Celik
(University of Miami)
Abstract
The world is increasingly reliant on energy
systems, making them a critical infrastructure
for essential services. This also makes them
vulnerable to attacks, which can result in
significant disruptions and damage. Microgrid
(MG) monitoring systems play a crucial role in
ensuring the safety and reliability of energy
systems. However, traditional fault diagnosis
techniques are limited to already established
faults due to the use of only historical data,
making it challenging to keep up with the
increasing demand for safety and reliability.
This paper proposes a digital twin based machine
learning (DTML) framework for fault diagnosis in
MG monitoring systems, with a focus on assessing
the resilience of MG end-to-end systems to
potential disruptions from adversaries. The
proposed framework utilizes digital twin based
random forest (RF), support vector machine (SVM), and logistic regression (LR) models, and shows that the RF-based model outperforms the others with an accuracy of 95%.
pdf
Environment Sustainability and Resilience
Technical Session · Environment Sustainability and Resilience
Critical Infrastructures
Chair: Raymond Smith (East Carolina University)
A Network Theory to Quantify and Bound Cyber-risk in
IT/OT Systems
Best Contributed Applied Paper - Finalist
Ranjan Pal (MIT Sloan School of Management), Rohan
Xavier Sequeira (University of Southern California), and
Sander Zeijlemaker and Michael Siegel (MIT Sloan School
of Management)
Abstract
IT/OT driven industrial control systems (ICSs)
such as water/power/transportation networks are
increasingly meeting the daily functional needs
of civilian society around the globe, while also making businesses more automated, efficient, productive, and profitable. However, poorly configured IoT security settings often increase the chances of
occurrence of (nation-sponsored) stealthy
spread-based APT malware attacks in ICSs that
might go undetected over a considerable period
of time. The ICS enterprise management is often
keen to get apriori statistical estimates of
cyber-loss impact post any cyber-attack event
such that it can plan ahead on its
cyber-resilience budget. In this paper, we
propose the first mathematical theory, based
upon stochastic processes and concentration
inequalities, to (a) statistically quantify
apriori the cyber-loss impact (distribution) on
an ICS infrastructure network post an APT
cyber-attack event, and subsequently (b) bound
the tail of such a cyber-risk distribution, for
arbitrary impact distributions.
pdf
Safeguarding Infrastructure from Cyber Threats with
NLP-based Information Retrieval
Christin J. Salley, Neda Mohammadi, and John E. Taylor
(Georgia Institute of Technology)
Abstract
Natural disasters disrupt systems, leading to
critical infrastructure vulnerabilities prone to
cyber-attacks. The MITRE ATT&CK Enterprise
Matrix is a knowledge base for threat analyses
in the cybersecurity community. Existing
processes to derive possible attack
methodologies from this Matrix are largely
manual and time-consuming. It is essential to
automate the information retrieval process to
reduce human errors, improve efficiency, and
free up resources for identifying unrevealed
cyber-attacks. We propose a framework that
incorporates Natural Language Processing (NLP)
and Text Mining to automatically generate sets
of attack paths from the technique descriptions
in the Matrix. The framework generates
similarity between techniques based on their
descriptions and creates an output showing
potential pathways an adversary can take to
infiltrate a system. The outputs are compared
against an annotated approach and attack report.
The results of this study provide an approach to
more quickly and effectively assess potential
cyber-attacks towards protecting critical
infrastructure.
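A hedged sketch of the similarity step: TF-IDF vectors over technique descriptions and pairwise cosine similarity. The three descriptions below are paraphrased stand-ins rather than actual ATT&CK text, and the framework's full attack-path generation is not shown.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-ins for technique descriptions.
techniques = {
    "Phishing": "Adversaries send spearphishing messages to gain access.",
    "Valid Accounts": "Adversaries obtain and abuse credentials of accounts.",
    "Data Destruction": "Adversaries destroy data and files on systems.",
}
names = list(techniques)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(techniques.values())
sim = cosine_similarity(tfidf)          # pairwise description similarity

for i, a in enumerate(names):
    for j in range(i + 1, len(names)):
        print(f"{a} ~ {names[j]}: {sim[i, j]:.2f}")
```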
pdf
Modeling of Circular Economy Strategies for CFRP-made
Aircrafts
Arnd Schirrmann and Uwe Beier (Airbus)
Abstract
In a circular economy, recycling of materials at
the end of a product's life cycle is a key
issue. This paper discusses the sustainability
impacts of different recycling strategies for
CFRP-made aircraft and how they weigh up against
alternative measures such as waste reduction and
lower material consumption in the manufacture of
the product. The analysis includes environmental
and cost impacts for different strategies and
market scenarios. A quantitative system dynamic
simulation of the life cycle of an aircraft
program is used. The subject of the life cycle
simulation model is the CFRP mass flow, CO2
emissions and associated costs. In addition, the
effects of R&T investments in new technologies
for recycling and waste prevention as well as
the reduction of material consumption were
investigated.
pdf
Technical Session · Environment Sustainability and Resilience
Food and Supply Chains
Chair: Virginia Fani (University of Florence)
System Dynamics Simulation of External Supply Chain
Disruptions on a Simplified Semiconductor Supply
Chain
Anna Christina Hartwick, Abdelgafar Ismail, Beatriz
Kalil Valladão Novais, Mohammed Zeeshan, and Hans
Ehm (Infineon Technologies AG)
Abstract
Due to the vital importance of semiconductor products
for other industries, the production of
semiconductors and impact of external
disruptions on the semiconductor supply chain
should be well understood. As semiconductor
manufacturing involves intrinsically long cycle times, ranging from 50 to 100 days, with operations running 24/7, 365 days per year, a correct understanding of potential disturbances is essential. Examples of
these disturbances include pandemics, extreme
weather events, geopolitical tensions and war.
These hazards pose various risks for supply
chains, for example, the bullwhip and ripple
effect. To simulate the result of such risks, a
simplified system dynamics model of a typical
semiconductor manufacturing supply chain was
constructed using the AnyLogic software. The
model serves as a what-if scenario foundation to
evaluate certain external circumstances
dependent on current global situations to
enhance supply chain resilience.
pdf
An Agent-Based Model of Agricultural Land Use in
Support of Local Food Systems
Poojan Patel and Caroline Krejci (University of Texas at
Arlington), Nicholas Schwab (University of Northern
Iowa), and Michael Dorneich (Iowa State University)
Abstract
Local food systems, in which consumers source
food from nearby farmers, offer a sustainable
alternative to the modern industrial food supply
system. However, scaling up local food
production to meet consumer demand will require
farmers to allocate more land to this purpose.
This paper describes an agent-based model that
represents commodity-producing Iowa farmers and
their decisions about converting some of their
acreage to specialty crop production for local
consumption. Farmer agents’ land-use
decisions are informed by messages passed to
them via their social connections with other
farmers in their communities and messages from
agricultural extension agents. Preliminary
experimentation revealed that leveraging
extension agents to increase the frequency and
strength of messages to farmers in support of
local food production has a modest positive
impact on adoption. By itself, however, this
intervention is unlikely to yield significant
improvements to food system sustainability.
pdf
Technical Session · Environment Sustainability and Resilience
Simulation for Sustainability
Chair: Jonathan M. Gilligan (Vanderbilt University)
Sustainability Assessment Through Simulation: The
Case Of Fashion Renting
Virginia Fani and Romeo Bandinelli (University of
Florence)
Abstract
The fashion industry is widely known as one of
the most environmentally impacting. To address
the overconsumption issue, the fashion renting
business model allows renting clothes or
accessories instead of buying them, extending
the useful life of products. However, concerns
about the sustainability of fashion renting
supply chains have arisen, especially due to
reverse logistics. In this context, a hybrid
simulation model is developed to support fashion
companies in the design and evaluation of
renting supply chain configurations. Discrete Event Simulation (DES) represents the logistics flows, while Agent-Based Modeling (ABM) integrated with a Geographic Information System (GIS) represents the supply chain’s nodes in the real environment. GIS helps estimate the sustainability of the supply chain by importing actual data on the distances covered. The proposed parametric
model will enable performing scenario analyses
to assess the best configuration in terms of
environmental impact.
pdf
Simulative Analysis of the Sustainability Driven
Transformation of Casting Plants
Johannes Dettelbacher, Wolfgang Schlüter, and
Alexander Buchele (Ansbach University of Applied
Sciences)
Abstract
The current energy crisis and high fossil fuel
costs are challenging energy intensive
industries such as non-ferrous foundries. It is
therefore important to promote the transition to
renewable energy sources with the
electrification of melting units. This pilot
study is the first to simulate the transition of
conventional foundries to sustainable
technologies. For this purpose, a simulation
model based on a selected example company is
developed. It takes into account the energy
consumption and the logistical effects of a
converted operation. The simulation model is
implemented as a hybrid simulation combining a
discrete event simulation at the plant level and
a process simulation within the furnaces. The
study shows how a sustainable energy supply can
be achieved in foundries. The effects of
efficiency as well as energy costs and emissions
are also taken into account.
pdf
A Customizable Community-Building-Energy-Modeling
Decision Support System (CCBEM-DSS) for Net-Zero
Planning in Developing Countries
Omprakash Ramalingam Rethnam and Albert Thomas (Indian
Institute of Technology Bombay)
Abstract
Buildings contribute to about 40% of global
energy-related CO2 emissions, and reducing
energy demand in buildings has become one of the
vital components of the current climate change
mitigation strategies. Optimizing energy for the
urban building stock by energy-efficient
retrofits is becoming increasingly popular in
developed countries where the functional and
construction elements of the stock are uniform,
along with the updated stock database already
built in desirable standard formats for energy
simulation exchange. However, a decision support
system to arrive at energy-efficient retrofits
for developing countries where the building
stock is highly diverse, with varying
construction and operational philosophies, and
has no readily available datasets of existing
stock is highly challenging. To close this gap,
this study suggests an adaptable decentralized
community building energy simulation and
modeling schema using free and open-source tools
for retrofit decision-making.
pdf
Technical Session · Environment Sustainability and Resilience
Electric and Autonomous Transportation
Chair: Neda Mohammadi (Georgia Institute of Technology)
Simulation, Optimization and Control of Trajectories
of ASVs Performing HACBS Monitoring Missions in Lentic
Waters
Alfredo Gonzalez-Calvin, Lía García-Perez,
José Luis Risco-Martín, and Eva Besada-Portas
(Complutense University of Madrid)
Abstract
Harmful Algae and Cyanobacteria Blooms (HACBs)
are dangerous dynamic processes for the
users/inhabitants of water resources. Their development can be anticipated, and contingency plans prepared, by using Autonomous Surface Vehicles
(ASVs) equipped with a self-driven system
capable of deciding how to displace the ASV and
its multi-parametric probe to take measurements
in the 3D locations of the water body where the
HACB is likely to occur. This paper presents a
new self-driven system for that purpose,
consistent on 1) an offline trajectory planner
for the ASV that exploits the information
provided by a commercial HACBs simulator to
optimize, in turn, the ASV horizontal and probe
vertical displacements; and 2) a guidance and
control system specially designed for making the
ASV follow the planned trajectories. The paper
also presents a comprehensive set of simulations
to evaluate our proposal's performance and
adjust its parameters.
pdf
Lightweight Smart Charging vs. Immediate Charging
with Buffer Storage: Towards a Simulation Study for
Electric Vehicle Grid Integration at Workplaces
Paul Benz and Marco Pruckner (Universität
Würzburg)
Abstract
The present study investigates the extension of
an existing simulation model combining system
dynamics and discrete event simulation by linear
optimization for an electric vehicle charging
system. The existing simulation framework is
extended by a smart charging strategy based on
linear programming in order to exploit the
flexibility of real charging processes at a
workplace parking lot for a better integration
of solar photovoltaic electricity generation.
Therefore, different smart charging strategies
are evaluated. In multiple simulation runs, the
strategies are compared with immediate charging
using a stationary battery energy storage system
for intermediate storage of electricity
generated by solar photovoltaic. Results show
that smart charging strategies can achieve
similarly good results with respect to the
self-sufficiency rate and self-consumption rate.
In the context of a 100 kWp PV system, the
combination of optimizing charging rates and
stationary battery energy storage resulted in
self-sufficiency rates of more than 90% in the
simulation.
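A minimal sketch of the linear-programming core of such a smart charging strategy: minimize grid import while meeting a vehicle's energy demand, given an assumed PV availability profile and charger limit. All numbers are illustrative, not the study's.

```python
import numpy as np
from scipy.optimize import linprog

T = 10                                   # hours the EV is parked (assumed)
pv = np.array([0, 1, 3, 5, 6, 6, 5, 3, 1, 0], float)   # PV power available (kW)
need, p_max = 20.0, 7.0                  # energy demand (kWh), charger limit (kW)

# Variables x = [p_1..p_T, g_1..g_T]: charging power and grid import.
# Minimize total grid import subject to g_t >= p_t - pv_t and a full charge.
c = np.r_[np.zeros(T), np.ones(T)]
A_ub = np.hstack([np.eye(T), -np.eye(T)])          # p_t - g_t <= pv_t
b_ub = pv
A_eq = np.r_[np.ones(T), np.zeros(T)].reshape(1, -1)
b_eq = [need]                                       # sum of p_t equals demand
bounds = [(0, p_max)] * T + [(0, None)] * T

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("grid energy needed (kWh):", round(res.fun, 2))
print("charging plan (kW):", res.x[:T].round(2))
```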
pdf
A Simulation-Based Decision Support Tool for Direct
Current Fast Charger Installations
Cathy Rupp (BC Hydro); Deep Jariwala, Suellen Ventura,
and Scott Nason (SAS Institute (Canada), Inc); Bahar
Biller (SAS Institute, Inc); and Yanan Sun and Parvir
Girn (BC Hydro)
Abstract
We develop a simulation-based tool for
supporting direct current fast charger (DCFC)
installation decisions. Our simulation captures
details of the DCFC network configuration,
non-stationary arrival patterns of the electric
vehicles to the fast charging DCFC stations,
various DCFC attributes, charging time
distributions, and customer behavior. The
statistical analysis of the simulation generated
output data produces various key performance
indicators (KPIs) including DCFC utilizations,
number of electric vehicles charged and left
uncharged, and queueing experience of the
customers. One of the key challenges of
developing this simulation is its validation: we
have validated the simulation with the
historical DCFC charging session data and past
observations of the DCFC utilizations. The
resulting data-driven simulation is used for
supporting DCFC planning through its capability
to conduct scenario analysis and predict various
KPIs.
pdf
Technical Session · Environment Sustainability and Resilience
Water and Environmental Resources
Chair: Christin Salley (Georgia Institute of
Technology)
Equity-Driven Management of Essential Environmental
Resources Under Price-Based Consumption
Shai Amouyal and Noa Zychlinski (Technion - Israel
Institute of Technology)
Abstract
The global climate crisis and population growth
restrict the availability of essential
environmental resources such as water and energy
and this situation continues to deteriorate. If
and when conditions become extreme, only the
well-off will have access to these valuable resources. With that in mind, we look for solutions to achieve equity within societies while preserving, to the degree possible, natural
resources. We suggest a method for setting
differential pricing for each population
stratum, so that each spends a relatively
similar percentage of their income on these
basic commodities, without depleting valuable
resources. Our method optimizes the prices while
simultaneously estimating the unknown
consumption–price relation. We show the
effectiveness of our method based on data from
Israel and through extensive simulation
experiments reflecting different levels of
income inequality within societies, different
consumption–price relations, and resource
availability. Our study shows that equity and
resource preservation can go hand-in-hand.
pdf
Modeling the Dynamics of Sediment Transport, Tides,
and Sea-Level Rise: Implications for the Resilience of
Coastal Bengal
Christopher M. Tasich, Jonathan M. Gilligan, and George
M. Hornberger (Vanderbilt University)
Abstract
The coastal zone of the
Ganges-Brahmaputra-Meghna (GBM) Delta is widely
recognized as one of the most vulnerable places
to sea-level rise (SLR), with around 57 million
people living within 5 m of sea level. Sediment
transported by the Ganges, Brahmaputra, and
Meghna rivers has the potential to raise the
land and offset SLR. There is significant
uncertainty in future sediment supply and SLR,
which raises questions about the sustainability
of the delta. We present a simple model, driven
by basic physics, to estimate the evolution of
the landscape under different conditions at low
computational cost. Using a single tuning
parameter, the model can match observed rates of
land aggradation. We find a strong negative
feedback, which robustly brings land elevation
into equilibrium with changing sea level. We
discuss how this model can be used to
investigate the dynamics of sediment transport
and the sustainability of the GBM Delta.
pdf
Infrastructure Planning Using a Dynamic Simulation to
Improve Sustainability and Resilience: Case Study for
a Coastal Watershed
Raymond Smith (East Carolina University)
Abstract
Climate change presents a significant challenge
for many coastal communities as sea level rise
is expected to cause widespread and chronic
flood inundation. This study examines the case
of a coastal watershed of ecological importance,
which is threatened by sea level rise and land
subsidence, as well as seasonal severe storms.
The health of the watershed and flood inundation
protection to the community depends on water
outflow; something which sea level rise will
further restrict. Infrastructure planning for an
active water management solution resilient to
severe storms and electrical grid disruptions is
needed. A dynamic simulation is used to evaluate
microgrid energy system design performance and
effectiveness in powering a critical
infrastructure pumping station during
storm-related electrical grid outage and
restoration scenarios.
pdf
Introductory Tutorials
Track Coordinator - Introductory Tutorials: Sanjay Jain (The George Washington University), Chang-Han
Rhee (Northwestern University)
Tutorial · Introductory Tutorials
Importance Sampling for Minimization of Tail Risks: A
Tutorial
Chair: Chang-Han Rhee (Northwestern University)
Anand Deo (Indian Institute of Management Bangalore) and
Karthyek Murthy (Singapore University of Technology and
Design)
Abstract
This paper provides an introductory overview of
how one may employ importance sampling (IS)
effectively as a tool for solving stochastic
optimization formulations incorporating tail
risk measures such as Conditional Value-at-Risk.
Approximating the tail risk measure by its
sample average approximation, while appealing
due to its simplicity and universality in use,
requires a large number of samples to be able to
arrive at risk-minimizing decisions with high
confidence. In simulation, IS is among the most
prominent methods for substantially reducing the
sample requirement while estimating
probabilities of rare tail events. Can IS be
similarly effective for optimization as well?
This tutorial aims to provide an overview of the
two key ingredients in this regard, namely, (i)
how one may arrive at an effective importance
sampling change of measure prescription at every
decision, and (ii) the prominent techniques
available for integrating such a prescription
within a solution paradigm for stochastic
optimization.
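As a self-contained warm-up in the spirit of the tutorial, the sketch below estimates a Gaussian tail probability by exponential tilting: sampling from N(a, 1) instead of N(0, 1) and reweighting by the likelihood ratio exp(-a*x + a^2/2). The target level a and sample size are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
n, a = 100_000, 4.0                      # estimate P(Z > 4) for Z ~ N(0, 1)

# Naive Monte Carlo: almost no samples land in the tail.
z = rng.standard_normal(n)
naive = (z > a).mean()

# Exponential tilting: under N(a, 1) the rare event becomes typical,
# and each sample is reweighted by phi(x)/phi(x - a) = exp(-a*x + a^2/2).
x = rng.normal(a, 1.0, n)
weights = np.exp(-a * x + a**2 / 2)
is_est = ((x > a) * weights).mean()

print("true:", norm.sf(a), " naive:", naive, " IS:", is_est)
```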
pdf
Tutorial · Introductory Tutorials
Event Graphs: Syntax, Semantics, and Implementation
Chair: Md Tariqul Islam (Purdue University)
Murat M. Gunal (Fenerbahce University); Yahya Ismail
Osais (King Fahd University of Petroleum and Minerals,
Interdisc. Research Center for Intellig. Secure
Systems); and Gerd Wagner (Brandenburg University of
Technology)
Abstract
This tutorial aims to introduce Event Graphs
(EGs), invented 40 years ago by Lee Schruben to
allow event-based modeling of discrete dynamic
systems. Their simplicity and naturalness in
causality and simulation modeling
made EGs popular in research and practice. In a
simulation, an event causes state changes in a
system as well as other events to happen in the
future. EGs provide a parsimonious diagram
representation for the Event Scheduling paradigm
of Discrete Event Simulation. We first introduce
their visual syntax and informal semantics, and
then present a recent extension by adding
objects to EGs. Our tutorial also includes an
introduction to the formal semantics of EGs and
a Python implementation for executing EGs.
pdf
Tutorial · Introductory Tutorials
Simulation-Driven Digital Twins: The DNA of Resilient
Supply Chains
Chair: David T. Sturrock (Simio LLC)
Stephan Biller (Purdue University) and Paul Venditti,
Jinxin Yi, Xi Jiang, and Bahar Biller (SAS Institute,
Inc)
Abstract
This tutorial defines what a digital twin is and
outlines its four required characteristics.
Digital twins are developed to derive insights
to control entities and processes in the digital
world with simulation as one of the key
technologies lying at the heart of this
development. The resulting insights are used to
prescribe actions in the physical world to fix
future problems before they happen. This
tutorial describes the key digital twin
development functions together with the digital
twin enabling technologies with focus on the use
of simulation for process twin development. The
corresponding functions and technologies are
displayed on several different digital twin
development frameworks with the potential to
serve as guides for practitioners interested in
developing digital twin solutions. We conclude
with an example of a supply chain digital twin
use case and the role of simulation and AI in
the twin development.
pdf
Tutorial · Introductory Tutorials
Tested Success Tips for Simulation Project Excellence
Chair: Björn Johansson (Chalmers University of
Technology)
David T. Sturrock (Simio LLC)
Abstract
How can you make your projects successful?
Modeling can certainly be fun, but it can also
be quite challenging. With the new demands of
Smart Factories, Digital Twins, and Digital
Transformation, the challenges multiply. You
want your first and every project to be
successful, so you can justify continued work.
Unfortunately, a simulation project is much more
than simply building a model -- the skills
required for success go well beyond knowing a
particular simulation tool.
pdf
Tutorial · Introductory Tutorials
Design and Analysis of Simulation Experiments Using Three
Simple Statistical Formulas
Chair: Sanjay Jain (The George Washington University)
Averill Law (Averill M. Law & Associates, Inc.)
Abstract
Output-data analysis is arguably the
most-researched topic in the field of simulation
modeling, with more than 1000 technical papers
having been written. However, many of the
published papers are highly mathematical in
nature, making them difficult to understand for
many simulation practitioners. In this tutorial,
we discuss the replication and
replication/deletion approaches which can
address most analysis problems using three
simple formulas (or expressions) from a first
undergraduate statistics course. Although the
replication approaches discussed above are
widely used for estimating the mean of a single
simulated system, we show that the same three
formulas can also be used to compare any number
of simulated systems, to handle multiple system
performance measures simultaneously, and also to
estimate performance measures such as
probabilities and percentiles rather than just
means. We also discuss a relatively simple
graphical methodology for determining a warmup
period if steady-state characteristics are of
interest.
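For readers who want to see the three formulas in code, the sketch below computes a replication-based point estimate and confidence interval: the sample mean, the sample variance, and the t-based half-width. The exponential replication data are synthetic stand-ins, not results from the tutorial.

```python
import numpy as np
from scipy import stats

def replication_ci(samples, alpha=0.05):
    """The three textbook formulas: sample mean, sample variance,
    and a t-based confidence-interval half-width."""
    x = np.asarray(samples, dtype=float)
    n = len(x)
    mean = x.mean()                               # formula 1: sample mean
    var = x.var(ddof=1)                           # formula 2: sample variance
    half = stats.t.ppf(1 - alpha / 2, n - 1) * np.sqrt(var / n)  # formula 3
    return mean, (mean - half, mean + half)

# e.g., average waiting times from 10 independent replications
rng = np.random.default_rng(1)
reps = rng.exponential(scale=4.0, size=10)
print(replication_ci(reps))
```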
pdf
Tutorial · Introductory Tutorials
Statistical Uncertainty Quantification for Expensive
Black-Box Models: Methodologies and Input Uncertainty
Applications
Chair: Chang-Han Rhee (Northwestern University)
Henry Lam (Columbia University)
Abstract
This tutorial reviews methodologies for
quantifying statistical uncertainty in
computationally expensive black-box models,
which arise frequently in data-driven simulation
analyses under input uncertainty. When facing
these models, it can be difficult to run
repeated evaluations due to computation cost,
and also to obtain auxiliary information such as
gradients due to analytical intractability, thus
rendering many traditional statistical
approaches challenging to apply. We describe
several lines of approaches to resolve these
challenges, including data-splitting methods
based on batching variants, a recent so-called
cheap bootstrap approach, and subsampling
schemes. We discuss the applications of these
approaches to simulation, including problems that suffer from both aleatory error, exhibited via Monte Carlo noise, and epistemic error stemming from input uncertainty.
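A sketch of the cheap-bootstrap idea discussed in this line of work is shown below: only a handful B of resample evaluations of the expensive estimator, with a t critical value on B degrees of freedom compensating for the small B. Readers should consult the tutorial for the exact construction; the data and estimator here are toys.

```python
import numpy as np
from scipy import stats

def cheap_bootstrap_ci(data, estimator, B=4, alpha=0.05, rng=None):
    """Sketch of the cheap-bootstrap construction: B (very few) resample
    evaluations of an expensive estimator, paired with a t_B quantile."""
    rng = rng or np.random.default_rng()
    x = np.asarray(data)
    psi_hat = estimator(x)
    psi_star = np.array([estimator(rng.choice(x, size=len(x), replace=True))
                         for _ in range(B)])
    s = np.sqrt(np.mean((psi_star - psi_hat) ** 2))
    half = stats.t.ppf(1 - alpha / 2, B) * s
    return psi_hat, (psi_hat - half, psi_hat + half)

rng = np.random.default_rng(3)
data = rng.exponential(size=500)
print(cheap_bootstrap_ci(data, np.mean, B=4, rng=rng))
```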
pdf
Tutorial · Introductory Tutorials
Tutorial: Basics of Metamodeling
Chair: Paulo Victor Freitas Lopes (Chalmers University of
Technology, Aeronautics Institute of Technology)
Russell Barton (The Pennsylvania State University)
Abstract
Metamodels are fast-to-compute mathematical
models that are designed to mimic the
input-output behavior of discrete-event or other
complex simulation models. Linear regression
metamodels have the longest history, but other
model forms include Gaussian process regression
and neural networks. This introductory tutorial
highlights basic issues in choosing a metamodel
type and specific form, and making simulation
runs to fit the metamodel. The tutorial ends
with a warning on potential pitfalls, and
suggestions on further reading to expand your
knowledge of metamodeling.
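As a minimal illustration of fitting a metamodel to simulation output, the sketch below trains a Gaussian process regressor from scikit-learn (an assumed dependency); the simulate function is a synthetic stand-in for an expensive simulation, and the kernel choice is illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Stand-in for an expensive simulation: noisy response over one input.
def simulate(x, rng):
    return np.sin(3 * x) + 0.1 * rng.standard_normal(x.shape)

rng = np.random.default_rng(0)
X = np.linspace(0.0, 2.0, 12).reshape(-1, 1)   # design points (sim runs)
y = simulate(X[:, 0], rng)

# Gaussian process metamodel; WhiteKernel absorbs replication noise.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

x_new = np.array([[0.7]])
mean, sd = gp.predict(x_new, return_std=True)  # fast prediction + uncertainty
print(mean, sd)
```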
pdf
Tutorial · Introductory Tutorials
An Introduction to Discrete-event Modeling and Simulation
with DEVS
Chair: Russell R. Barton (Pennsylvania State
University)
An Introduction to Discrete-Event Modeling and
Simulation with DEVS
Yentl Van Tendeloo and Randy Paredis (University of
Antwerp) and Hans Vangheluwe (University of Antwerp,
Flanders Make)
Abstract
The Discrete-Event System Specification (DEVS)
is a formalism devised by Bernard Zeigler in the
late 1970s for modeling complex dynamical
systems using a discrete-event abstraction. At
this abstraction level, a timed sequence of pertinent "events" input to a system causes instantaneous changes to the state of the system. The main advantages of DEVS are its precise, implementation-independent specification and its support for modular, hierarchical composition. This tutorial
introduces the Classic DEVS formalism in a
bottom-up fashion, using a simple traffic light
example. The syntax and operational semantics of
Atomic (i.e., non-hierarchical) and of Coupled
(i.e., hierarchical, connecting interacting
components) models are introduced. Finally, a
simplified DEVS model for performance analysis
of vessel movements in the Port of Antwerp is
presented. All examples in the paper use
PythonPDEVS, though other DEVS tools could
equally well be used. We conclude with
suggestions for further reading on DEVS theory,
variants, and tools.
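The sketch below conveys the Atomic DEVS ingredients on the tutorial's traffic light example: a state set, a time-advance function, an internal transition, and an output function. It is a bare-bones, self-contained illustration, not the PythonPDEVS API used in the paper; the phase durations are invented.

```python
# A self-contained sketch of an Atomic DEVS model (state set, time
# advance ta, internal transition delta_int, output lambda).
class TrafficLight:
    TA = {"green": 57.0, "yellow": 3.0, "red": 60.0}   # illustrative lifetimes
    NEXT = {"green": "yellow", "yellow": "red", "red": "green"}

    def __init__(self):
        self.state = "red"

    def time_advance(self):     # ta(s): lifetime of the current state
        return self.TA[self.state]

    def output(self):           # lambda(s): emitted just before transition
        return f"turn_{self.NEXT[self.state]}"

    def int_transition(self):   # delta_int(s): next state
        self.state = self.NEXT[self.state]

def simulate(model, until):
    """Bare-bones abstract simulator for a single atomic model."""
    t = 0.0
    while True:
        t_next = t + model.time_advance()
        if t_next > until:
            break
        print(f"t={t_next:6.1f}  {model.output()}")
        model.int_transition()
        t = t_next

simulate(TrafficLight(), until=180.0)
```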
pdf
Healthcare and Life Sciences
Track Coordinator - Healthcare and Life Sciences: Bjorn Berg (University of Minnesota), Masoud Fakhimi
(University of Surrey), Tugce Martagan (Eindhoven University
of Technology)
Technical Session · Healthcare and Life Sciences
Simulation Modeling for COVID I
Chair: Christine Currie (University of Southampton)
Using Simulation to Study the Impact of Covid-19
Policies on the Availability of Childcare
Adam Cahall, Jasmine Eng, Jane Gao, Ben Hilbert, and
Jamol Pender (Cornell University)
Abstract
The COVID-19 pandemic has had a profound impact
on the lives of working parents, who are
struggling to balance their responsibilities at
work and at home, as well as childcare providers
who are working hard to keep their doors open.
In this paper, we examine the effect of
childcare policies on the availability of
childcare. Specifically, we investigate how
classroom size, the likelihood of COVID-19
infection, and the number of days a classroom
may need to close affect the amount of time
parents will need to stay at home with their
children. Our results show that even low
probabilities of infection combined with
stringent policies can have a large impact on
the duration of a child's exclusion from
childcare services.
pdf
Enhancing Pandemic Preparedness Using Mean Field and
Simulation Modeling
Mohammad Dehghanimohammadabadi (Northeastern University)
and Gökçe Dayanıklı (University of
Illinois at Urbana-Champaign)
Abstract
The COVID-19 pandemic has emphasized the
importance of preparedness and response plans
for healthcare providers and rational responses
from society to effectively manage infectious
disease outbreaks. Strategic guidelines should
be created to ensure the availability of
required resources while considering the
rational response of individuals under different
policy scenarios. This study uses a
simulation-optimization-game theory approach to
first determine the daily number of infected
people in response to social distancing policies
in a game theoretical setup. Second, this daily
number of infected people is used in a
simulation to determine an optimal replenishment
policy for restocking personal protective
equipment (PPE) items. The model incorporates a
combination of mean field games modeling and a
simulation model in Simio to perform
optimization tasks. This approach aims to
guarantee the availability of required resources
by taking into account the rational response of
individuals under different policy scenarios.
pdf
Equitable Allocation of Scarce Resources during the
COVID-19 Pandemic: A Case Study for Convalescent
Plasma Distribution
Jasdeep Singh Dhahan and Alexander Rutherford (Simon
Fraser University), Andrew Shih (University of British
Columbia), Na Li (University of Calgary), and Douglas
Down (McMaster University)
Abstract
Resource planning during pandemics presents many challenges, and equitable decisions about resource allocation must be made. There is, however, no standard definition of equity, and robust mathematical formulations can require substantial data, while in a novel pandemic there is limited historical information available to inform decisions. Decision makers can instead define equity through population proportions (pro-rata), a notion that is readily implementable. We present a practical framework
for an equitable allocation of scarce resources
using population proportions, disease
demographics, and resource utilization. We
assess our framework using a stochastic
simulation model, calibrated to COVID-19 case
data, in a case study for convalescent plasma
distribution in the context of the clinical
trial CONCOR-1. We show that pro-rata resource
allocation can be inequitable and that decision
makers can consider readily available
information, such as resource utilization and
case data, to inform equity and proactively
manage scarce resources during a pandemic.
pdf
Technical Session · Healthcare and Life Sciences
Simulation Modeling for COVID II
Chair: Yuming Sun (Georgia Institute of Technology)
A Multi-Team Multi-Model Collaborative COVID-19
Forecasting Hub for India
Aniruddha Adiga (University of Virginia); Siva Athreya
(International Centre for Theoretical Sciences-TIFR,
Indian Statistical Institute); Kantha Rao Bhimala (CSIR
Fourth Paradigm Institute); Ambedkar Dukkipati and Tony
Gracious (Indian Institute of Science); Shubham Gupta
(IBM Research Europe); Benjamin Hurt, Gursharn Kaur,
Bryan Lewis, and Madhav Marathe (University of
Virginia); Vidyadhar Mudkavi and Gopal Krishna Patra
(CSIR Fourth Paradigm Institute); Przemyslaw Porebski
(University of Virginia); Nihesh Rathod and Rajesh
Sundaresan (Indian Institute of Science); Srinivasan
Venkataramanan (University of Virginia); and Sarath
Yasodharan (Indian Institute of Science)
Abstract
During the COVID-19 pandemic, India saw some of the highest numbers of cases and deaths. Data quality, continuously changing policy, and the public health response made forecasting extremely difficult. Given the challenges in real-time forecasting, several countries started multi-team collaborative efforts. Inspired by these works, academic partners from India and the United States set up a repository for aggregating India-specific forecasts from
multiple teams. In this paper, we describe the
effort and the challenges in setting up the
repository. We discuss the development of
simulations of compartmental models to model
specific waves of the pandemic and show that the
simulation model designed specifically for the
Omicron wave was able to predict the onset and
peak sizes accurately. We employed a
median-based ensemble model to aggregate the
individual forecasts. We observed that the median-based ensemble was relatively stable compared to the constituent models and was one of the better-performing models.
pdf
Multi-criteria Simulation Optimization for COVID-19
Testing in Schools
Yiwei Zhang, Maria Mayorga, Julie Ivy, and Julie Swann
(North Carolina State University)
Abstract
Evidence has shown that random screening tests
are effective in reducing COVID-19 infections in
schools. However, test administration may be
hindered due to a limited budget or low
participation caused by pandemic fatigue. Thus,
we seek to balance the number of tests
administered with end-of-semester infections. To
do this we use an SEIR model to simulate
SARS-CoV-2 transmissions within K-12 schools,
design a multi-objective simulation optimization
problem, and tune an NSGA-II algorithm to find
the best testing schedules. We find the Pareto
front of optimal schedules of screening tests,
which can be used by stakeholders to inform test
administration strategies. We discuss insights
about the characteristics of optimal strategies,
for example, when there are limited number of
tests available or a desire to use few tests,
the optimal plan is to perform the tests earlier
in the semester and at higher intensity.
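For orientation, a deterministic discrete-time SEIR stepper of the kind underlying such studies is sketched below; the forward-Euler update and all parameter values are illustrative assumptions, not the paper's calibrated K-12 model.

```python
import numpy as np

def seir(beta, sigma, gamma, n_pop, e0, days, dt=0.1):
    """Deterministic SEIR integration via forward Euler (toy parameters)."""
    s, e, i, r = n_pop - e0, float(e0), 0.0, 0.0
    infections = 0.0
    for _ in range(int(days / dt)):
        new_e = beta * s * i / n_pop * dt   # S -> E flow
        new_i = sigma * e * dt              # E -> I flow
        new_r = gamma * i * dt              # I -> R flow
        s -= new_e
        e += new_e - new_i
        i += new_i - new_r
        r += new_r
        infections += new_e
    return infections

# A testing policy could be modeled by scaling beta on screening days;
# the objective then trades tests used against end-of-semester infections.
print(seir(beta=0.4, sigma=1 / 3, gamma=1 / 7, n_pop=500, e0=2, days=90))
```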
pdf
Endogenous Human Behavior in Models of COVID-19
Transmission: A Systematic Scoping Review
Alisa Hamilton (Johns Hopkins University, Center for
Systems Science and Engineering; One Health Trust);
Fardad Haghpanah, Sasha Tulchinsky, Nodar Kipshidze, and
Suprena Poleon (One Health Trust); Gary Lin (Johns
Hopkins Applied Physics Laboratory, One Health Trust);
Hongru Du and Lauren Gardner (Johns Hopkins University,
Center for Systems Science and Engineering); and Eili
Klein (One Health Trust; Johns Hopkins University,
Department of Emergency Medicine)
Abstract
While mathematical models of disease have been
important drivers of public policy since the
eighteenth century, the incorporation of
endogenous behavior driven by risk perception is
a relatively recent phenomenon (Klein et al.,
2007). Models incorporating behavior as
endogenous variables may enhance their
usefulness by providing an explicit mechanism
for how behavior varies in response to public
health measures and epidemic dynamics, resulting
in a more nuanced understanding of disease
transmission. We conducted a systematic scoping
review to understand the extent to which
endogenous behavior was incorporated into models
of COVID-19 transmission.
pdf
Technical Session · Healthcare and Life Sciences
Improving Emergency Department Efficiency Using Simulation
Chair: Vishnunarayan Girishan Prabhu (University of North
Carolina at Charlotte)
Measuring Emergency Department Resilience to Demand
Surge: A Discrete-Event Simulation Framework
Eman Ouda, Andrei Sleptchenko, and Mecit Can Emre
Simsekler (Khalifa University) and Ghada R. El-Eid
(Sheikh Shakhbout Medical City)
Abstract
This research explores the resilience components
in emergency departments (EDs) during surges
through discrete-event simulation (DES). By
focusing on the resistance and recoverability
components, the resilience of the ED is analyzed, along with patient flow and the resources required at each step. A simulation model of an ED in the UAE is developed and validated against collected timestamps.
The results demonstrate the ordinary conditions
of the ED and its calculated resilience,
recoverability, and resistance, as well as its
strength under conditions of surge demand. To investigate the impact of resources on the ED’s resilience, the resilience triangle is analyzed, and different interventions are applied by adding physicians, nurses, and beds and evaluating their effects. The methodology and simulation model provide significant insights for ED managers seeking to evaluate and improve their department’s resilience during surges and emergencies.
pdf
Analysis of the Resilience of an Emergency
Department: the Case of Accident with Multiple
Victims
Mariela Ester Rodriguez (National University of Jujuy);
Francesc Boixader (Computer Science School, Autonomous
University of Barcelona); Francisco Epelde (Consultant
Internal Medicine, Autonomous University of Barcelona);
Alvaro Wong (Autonomous University of Barcelona); Eva
Bruballa (Computer Science School, Autonomous University
of Barcelona); Armando De Giusti (National University of
La Plata); and Dolores Rexach and Emilio Luque
(Autonomous University of Barcelona)
Abstract
Caring for multiple victims of events such as natural disasters in an Emergency Department is critical. This differs from ordinary care in the number of patients that arrive, their severity, and the insufficiency of staff for such events. Designing and simulating this real-life scenario will be useful for disaster management decision makers. The objective of this simulation is to model a system that is resilient to critical situations. To model the input of this research, we worked with the percentage of patients received by the Cauquenes Hospital during the Chilean earthquake of February 27, 2010. Two situations are compared: the admission of patients facing an earthquake with normal daily staffing versus the admission of patients facing an earthquake with the relief chain activated. The latter allows the system to be resilient and adapt quickly to its new reality.
pdf
A Generalized Symbiotic Simulation Model of an
Emergency Department for Real-Time Operational
Decision-Making
Alexander R. Heib, Christine S. M. Currie, Bhakti
Stephan Onggo, and Honora K. Smith (University of
Southampton) and James Kerr (Hampshire Hospitals NHS
Foundation Trust)
Abstract
We describe the design of a generalizable
simulation model of an emergency department (ED)
that forms part of a symbiotic simulation tool
designed to improve short-term decision-making.
While the paper will give an overview of the
planned symbiotic simulation tool, our focus
here is on the generalizability of the
simulation model. The model is coded such that
the routing logic of patient pathways are not
explicitly defined but are instead included as
an input parameter. By structuring the model
this way, the pathways can instead be discovered
through process mining methods on standard
healthcare transactions data. This enables the
simulation model to be applied to other EDs
without redesigning all of the logical flows
within the model. As symbiotic simulation tools
are designed for ongoing use within the system
they model, utilizing process mining also allows
for automating recalibration of the patient
pathways if changes occur in the physical
system.
pdf
Technical Session · Healthcare and Life Sciences
Discrete-event Simulation Models to Inform Healthcare
Decisions
Chair: Marta Staff (University of Exeter)
Estimating Quantile Fields for a Simulated Model of a
Homeless Care System
Dashi I. Singham (Naval Postgraduate School)
Abstract
We construct a simulation model of a homeless
care system to determine the amount of new
housing and emergency shelter needed to support
the growing unsheltered population in Alameda
County, California. To quantify the performance
of the system, we assess the number of people
having unmet need via an estimate of the
quantile field using a recently developed
batching method. This approach helps right-size
the amount of housing and shelter resources
needed to quickly provide services to the
unsheltered population. We find that with a
large investment in housing to help the system
reach steady state, current levels of emergency
shelter may be sufficient to serve those with
unmet need.
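A hedged sketch of quantile estimation with a batching-based interval is shown below; it conveys the general idea only and is not the specific quantile-field method referenced in the abstract. The replication outputs are synthetic.

```python
import numpy as np
from scipy import stats

def batched_quantile_ci(reps, q=0.9, m=5, alpha=0.05):
    """Quantile estimate with a t-interval from m batches of replications.
    `reps` holds one performance value (e.g., unmet need) per replication."""
    batches = np.array_split(np.asarray(reps), m)
    qs = np.array([np.quantile(b, q) for b in batches])
    half = stats.t.ppf(1 - alpha / 2, m - 1) * qs.std(ddof=1) / np.sqrt(m)
    return qs.mean(), (qs.mean() - half, qs.mean() + half)

rng = np.random.default_rng(7)
unmet_need = rng.poisson(lam=120, size=200)   # toy replication output
print(batched_quantile_ci(unmet_need))
```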
pdf
Measuring the Operational Impacts of Right-Sizing
Prenatal Care Using Simulation
Leena Ghrayeb, Timothy Bryan, Meghana Kandiraju, Tejas
Maire, Yuanbo Zhang, Amy Cohn, and Alex Peahl
(University of Michigan)
Abstract
Despite high levels of spending on prenatal
care, the U.S. has the worst maternal mortality
outcomes amongst peer high-income nations. In
response to a growing need for modernized
prenatal care policies, national prenatal care
stakeholders have developed a new model of
prenatal care, which moves away from a
“one-size-fits-all” model of
prenatal care delivery, and instead tailors care
to patients’ specific needs. In this
article, we develop a data-driven discrete event
simulation model to quantify the operational
impacts of adopting this new care paradigm. We
consider a case study of a large academic health
center, and derive input parameters for the
model from historical data. Our results suggest
that when compared with the
“one-size-fits-all” model of care,
the new tailored care policy leads to reduced
patient delays, as well as a reduction in
overbooking, implying increased flexibility in
the system.
pdf
Open-Source Modeling for Orthopedic Elective Capacity
Planning Using Discrete-Event Simulation
Alison Harper, Martin Pitt, and Thomas Monks (University
of Exeter)
Abstract
The increase in elective surgical waiting lists
as a result of the COVID-19 pandemic is creating
significant consequences for health services
worldwide. In the UK, the allocation of capital
funds to increase capacity for managing elective
waits has created planning and operational
challenges for health services. This paper
reports on the development and deployment of an
interactive web-based discrete-event simulation
model for supporting capacity planning of
surgical activity and ward stay in a proposed
new ring-fenced orthopedic facility in a UK
health service. The model is free and
open-source and developed to be generic and
applicable for new capacity planning of elective
recovery in orthopedics in other regions. With
minor adaptations it can also be readily
modified for application to other specialties.
Given the current relevance of managing record elective waiting lists, there is potential for widespread applicability of the simulation model, which is supported by our open approach to modeling.
pdf
Technical Session · Healthcare and Life Sciences
Simulation Modeling for Covid-19 III
Chair: Arindam Fadikar (Argonne National Laboratory)
Evaluating Parallelization Strategies for Large-Scale
Individual-Based Infectious Disease Simulations
Johannes Ponge (University of Münster), Lukas Bayer
(RPTU Kaiserslautern-Landau), Dennis Horstkemper
(University of Münster), Wolfgang Bock (RPTU
Kaiserslautern-Landau), and Bernd Hellingrath and
André Karch (University of Münster)
Abstract
Individual-based models (IBMs) of infectious
disease dynamics with full-country populations
often suffer from high runtimes. While there are
approaches to parallelize simulations, many prominent epidemic models remain single-core implementations, suggesting a lack of consensus among the research community on whether parallelization is desirable or achievable.
Rising demands in model scope and complexity,
however, imply that performance will continue to
be a bottleneck. In this paper, we discuss the
requirements and challenges of parallel IBMs in
general and the German Epidemic Micro-Simulation
System (GEMS) in particular. While the
exploitation of unique model characteristics can
yield significant performance improvement
potential, parallelization strategies generally
necessitate trade-offs in either hardware
requirements, model fidelity, or implementation
complexity. Therefore, the selection of
parallelization strategies requires a
comprehensive assessment. We present a
point-based evaluation scheme to assess the
potential of parallelization strategies as our
main contribution and exemplify its application
in the context of GEMS.
pdf
Determining the Impact of Facility Layout Methods on
Walk-in Covid-19 Vaccine Clinics: A Theoretical
Exploration
S. Yasaman Ahmadi and Jennifer Lather (University of
Nebraska Lincoln)
Abstract
Ensuring safety and public health is a paramount
concern in mass vaccination against contagious
respiratory infections. This study examines the
effects of layout methods and path routing
decisions on average patient travel distance
(TD) and time-in-system (TIS) within the context
of a theoretical mass vaccination clinic. Two
distinct layout methods, Perimeter and
Serpentine, are evaluated in conjunction with
two path routing conditions, Cyclical and
Unidirectional. Employing discrete-event
simulation, the study investigates multiple
patient turnouts and clinic operational hours.
The results reveal the significant impact of
layout on average TD, underscoring the
heightened efficiency of the Perimeter layout
and Unidirectional path. Furthermore, the
findings highlight the significant effect of
layout method on TIS when considering optimal
staffing configurations. Conversely, the
analysis indicates that path directionality does
not exert a statistically significant effect.
This study emphasizes the critical role of
layout design in optimizing vaccination clinics
for efficiency and effectiveness.
pdf
A Network-based Analytics Framework For
High-resolution Agent-Based Epidemic Simulation
Ensembles
Amro Alabsi Aljundi, Galen Harrison, Jiangzhuo Chen,
Madhav Marathe, Henning S. Mortveit, Anil Vullikanti,
and Abhijin Adiga (University of Virginia)
Abstract
High-resolution network-based contagion models
are being increasingly used to study complex
disease scenarios. Due to network-induced
heterogeneity and sophisticated disease and
intervention models, even simple simulation
exercises can lead to large volumes of complex
simulation outcomes. New approaches are required
to analyze them. Simulations of such network
spread processes can be viewed as attributed
temporal graphs. We describe a network-based
analytics framework that enables a user to
leverage this graphical viewpoint and apply
graph mining methods to perform fine-grained
analysis of the simulation outcomes and the
underlying network. The framework is based on a
microservices-oriented architecture, and is
designed to be general, adaptable, and scalable.
We demonstrate its utility through a case study
motivated by the COVID-19 pandemic involving the
spread of two variants on a large realistic
population network with multiple interventions.
We study the transmissions within and between
age-groups, importance of non-essential
interactions, and efficacy of interventions.
pdf
Technical Session · Healthcare and Life Sciences
Medical Decision Analysis
Chair: Navonil Mustafee (University of Exeter, The
Business School)
Continuous-Time Survival Model Study Designs for
Heart Recovery Applications
Jason Bodnar (ABIOMED, Inc.)
Abstract
Due to the aging global population, the science
of heart recovery is an essential area for
research to improve patient health, reduce
time-to-discharge, and delay overall mortality.
New medical device technology is needed to
advance these goals. For the medical community
to gain trust in and use these technologies in
their hospital environments, optimal study
design and proper execution of randomized
controlled trials is necessary. Such RCTs will
result in the collection of valid scientific
evidence for establishing the new device’s
risk and benefit profile in targeted patient
populations. Continuous time-to-event survival
models are commonly used to determine the amount
of data needed to demonstrate an improvement in
these profiles over current standard-of-care
therapies. This paper will compare simulated
power functions and sample size requirements for
a variety of survival methods in a two-sample
RCT setting. Simulation scenarios will encompass
various effect sizes, survival distribution
forms, and time-to-event density functions.
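A minimal Monte Carlo power study of this kind can be sketched as follows, assuming exponential survival times, uniform administrative censoring, and the logrank_test from the lifelines package (an assumed third-party dependency, not necessarily the paper's toolchain); all rates and censoring bounds are invented.

```python
import numpy as np
from lifelines.statistics import logrank_test  # assumed external dependency

def simulated_power(n_per_arm, hazard_ratio, n_sims=500, alpha=0.05, seed=0):
    """Monte Carlo power of a two-arm log-rank test under exponential
    survival with uniform administrative censoring (illustrative choices)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        t_c = rng.exponential(scale=12.0, size=n_per_arm)            # control
        t_t = rng.exponential(scale=12.0 / hazard_ratio, size=n_per_arm)
        c_c = rng.uniform(6.0, 24.0, size=n_per_arm)                 # censoring
        c_t = rng.uniform(6.0, 24.0, size=n_per_arm)
        res = logrank_test(np.minimum(t_c, c_c), np.minimum(t_t, c_t),
                           event_observed_A=t_c <= c_c,
                           event_observed_B=t_t <= c_t)
        rejections += res.p_value < alpha
    return rejections / n_sims

# e.g., power to detect a hazard ratio of 0.6 with 100 patients per arm
print(simulated_power(n_per_arm=100, hazard_ratio=0.6))
```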
pdf
KSIM 2.0: A Simulation of Kidney Allocation Using
OPTN Records
Masoud Barah (Northwestern University), Vikram Kilambi
(RAND Corporation), and Sanjay Mehrotra (Northwestern
University)
Abstract
The Organ Procurement and Transplantation
Network (OPTN) in the US allocates kidneys for
transplantation, but nearly one fifth of kidneys
from deceased donors are not utilized due to the
avoidance of transplantation for kidneys that
have been removed from a donor for too long. To
be able to provide clinically relevant
recommendations to the OPTN contractor, we
updated the KSIM discrete event simulation of
kidney allocation in the academic literature
using actual OPTN individual-level records for
patients and donors. As a case study, we
simulated offering kidneys at high risk of
discard to the first accepting transplant center
after 10 hours of accumulated cold time and
found increased utilization. The updated model
allows for greater clinical fidelity and can be
embedded in medical decision support systems.
pdf
Modeling and Simulation of the SARS-CoV-2 Lung
Infection and Immune Response with Cell-DEVS
Ali Ayadi (University of Strasbourg, ICube laboratory);
Claudia Frydman (Aix Marseille Université); and Quy
Thanh Le (Da Nang University of Science and Technology)
Abstract
Understanding why viral loads vary dramatically across individual patients is a critical challenge in addressing respiratory infections,
especially the severe acute respiratory syndrome
coronavirus 2 (SARS-CoV-2). The spatial-temporal
dynamics of viral infection in the respiratory
system and the immune system's response remain
difficult to study. Using modelling and
simulation (M&S) techniques may address this
problem. In this paper, we present a novel
modelling approach using the Cell-DEVS formalism
(a combination of Cellular Automata and DEVS),
to simulate the spatial-temporal dynamics of
viral spread in the lungs. Using a two-dimensional cellular space that mimics a lung, the proposed approach also captures the immune system response, the spread of viral infection, the state of lung epithelial tissue damage, and the state of immune cells. We demonstrate the
pertinence of our proposal on three different
scenarios representing three types of patients.
Qualitative evaluation by expert biologists
confirms that the produced simulations match the
observations made on patients.
pdf
Technical Session · Healthcare and Life Sciences
Patient Flow Through Healthcare Processes
Chair: Alison Harper (University of Exeter, The Business
School)
Integrating Home Health Care and Patient
Transportation: A Sample Average Approximation
Approach to Optimize Scheduling and Routing
Lorena Silvana Reyes Rubiano (Universidad de La Sabana,
RWTH Aachen University); Marcel Müller (Otto von
Guericke University Magdeburg); Jana Voegl (University
of Natural Resources and Life Sciences Vienna); Angelica
Sarmiento (Colombian School of Engineering Julio
Garavito); William Javier Guerrero (Universidad de La
Sabana); and Patrick Hirsch (University of Natural
Resources and Life Sciences Vienna)
Abstract
This study introduces an innovative strategy for
addressing the Home Healthcare and Dial-a-Ride
Problem (HHCDAP) concerning the transportation
of medical staff and patients, taking into
account the stochastic nature of service and
travel times. The problem involves assigning
suitable medical staff to patients and clients,
determining the order of visits, and identifying
opportunities for medical staff and patients to
share trips. We propose two objective functions
to minimize travel time for drivers and medical
staff. This problem adheres to numerous
constraints, including maximum work duration,
maximum waiting time, professional
qualifications, and vehicle capacity
limitations. We test our approach on a
small-scale instance to understand the
trade-offs between minimizing drivers' travel
time and minimizing the travel and waiting times
of medical staff and patients. Our results
indicate that the proposed strategy enhances the
efficiency of transporting medical staff and
patients.
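The sample average approximation idea at the core of this approach can be illustrated in a few lines: evaluate a candidate route against many sampled travel- and service-time scenarios and average, rather than against a single point estimate. The distributions and the mean_time network below are invented for the sketch.

```python
import numpy as np

def saa_route_cost(route, mean_time, n_scenarios=1000, seed=0):
    """Sample average approximation of a route's expected duration:
    average stochastic travel plus service times over sampled scenarios."""
    rng = np.random.default_rng(seed)
    legs = list(zip(route[:-1], route[1:]))
    total = 0.0
    for _ in range(n_scenarios):
        # lognormal travel times around the point estimate; gamma service times
        travel = sum(rng.lognormal(np.log(mean_time[a][b]), 0.25)
                     for a, b in legs)
        service = rng.gamma(shape=4.0, scale=5.0, size=len(route)).sum()
        total += travel + service
    return total / n_scenarios

mean_time = {"depot": {"p1": 15.0}, "p1": {"p2": 10.0}, "p2": {"depot": 20.0}}
print(saa_route_cost(["depot", "p1", "p2", "depot"], mean_time))
```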
pdf
A Preliminary Predictive Simulation Model for Hip and
Knee Replacement Profile-Dependent Pathway
Stages
Ahmed Bakali El Kassimi (Ecole des Mines de
Saint-Etienne, Univ Clermont Auvergne, INP Clermont
Auvergne, CNRS, UMR 6158 LIMOS); Marianne Sarazin
(Clinique Médico-Chirurgicale Mutualiste, Groupe
Aésio Santé); Xiaolan Xie (Ecole des Mines de
Saint-Etienne, Univ Clermont Auvergne, INP Clermont
Auvergne, CNRS, UMR 6158 LIMOS); and Pierre-Luc Fresard
and Bertand Semay (Clinique Médico-Chirurgicale
Mutualiste, Groupe Aésio Santé)
Abstract
Total hip and knee arthroplasty (THA/TKA)
surgeries are frequently performed on elderly
individuals and consist of preoperative,
operative, and rehabilitation stages. Despite
efforts to improve patient satisfaction, there is
a lack of personalized studies that optimize the
THA/TKA pathway. Our aim is to address this gap
by proposing a predictive simulation model that
considers patient-specific factors to enhance
patient satisfaction and organizational
efficiency. To achieve this, we propose using
process mining techniques to analyze the French
national healthcare database and distinguish
between standard care phases and
patient-dependent phases. We then apply machine
learning algorithms to predict specific stages
of care. The insights gained from these analyses
are used to compare and test predicted patient
pathways and their performances using our
simulation model.
pdf
Forecasting Patient Arrivals and Optimizing Physician
Shift Scheduling in Emergency Departments
Vishnunarayan Girishan Prabhu (University of North
Carolina); Kevin Taaffe (Clemson University); and Ronald
Pirrallo, William Jackson, Michael Ramsay, and Jessica
Hobbs (Prisma Health-Upstate)
Abstract
Emergency Departments (EDs) are the primary
access points for millions of patients seeking
medical care. The increasing patient demand and
lack of long-term dynamic planning strain the
EDs in providing timely patient care, leading to
crowding. Although a well-recognized problem, ED crowding remains prevalent, with suboptimal resource allocation as one significant contributing factor. In this research, we
developed an end-to-end solution that first
forecasted the patient arrivals to the partner
ED and then used an optimization model to
develop an optimal physician staffing schedule
to minimize the combined cost of patient wait
times, handoffs, and physician shifts. Finally,
the new schedule was tested using the validated
simulation model to evaluate the ED performance.
By generating shift schedules based on forecasts
and testing them in the validated simulation
model, we observed that patient time in the ED
and handoffs could be reduced by 5.6% and 9.2%
compared to current practices.
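A common baseline for the forecasting step is a seasonal-mean profile by hour and weekday, sketched below on synthetic arrivals; the paper's actual forecasting model is not specified here and may well be richer.

```python
import numpy as np
import pandas as pd

# Seasonal-mean baseline: forecast ED arrivals for an (hour, weekday) slot
# as the historical average for that slot, a common starting point before
# richer time-series models.
rng = np.random.default_rng(0)
idx = pd.date_range("2022-01-01", periods=24 * 7 * 8, freq="h")
arrivals = pd.Series(
    rng.poisson(lam=6 + 3 * np.sin(idx.hour / 24 * 2 * np.pi)), index=idx)

profile = arrivals.groupby([arrivals.index.dayofweek,
                            arrivals.index.hour]).mean()
forecast_tue_10am = profile.loc[(1, 10)]   # Tuesday, 10:00 slot
print(forecast_tue_10am)
```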
pdf
Technical Session · Healthcare and Life Sciences
Hybrid Simulation in Healthcare
Chair: Bjorn Berg (University of Minnesota)
Hybrid Models with Real-Time Data in Healthcare: A
Focus on Data Synchronization and
Experimentation
Navonil Mustafee and Alison Harper (University of
Exeter, The Business School) and Joe Viana (BI Norwegian
Business School)
Abstract
Conventional simulation models used in
Operations Research and Management Science
(OR/MS) use historical data. With the increasing
availability of real-time data, technologies
commonly associated with applied computing, such
as Data Acquisition Systems (DAS), may need to
be integrated with conventional OR/MS models to
develop Hybrid Models (HMs). We distinguish
between HMs that use only real-time data –
we refer to them as Digital Twins (DTs) –
and those using a combination of historical and
real-time data – called Real-time
Simulation (RtS). Our previous contribution
focused on the challenges of such integration, a
concept referred to as information fusion, and
presented a conceptualization of DT/RtS. This
paper focuses on DT/RtS data synchronization and
methods that could be employed from Parallel and
Distributed Simulation (PADS). The
conceptualizations and discussions reflect on
the authors' experience implementing an RtS of a
network of Emergency Departments and Urgent Care
Centers in the UK.
pdf
Modeling and Simulation of Genomic Sequencing
Platform Operations
Jules Le Lay (Centre Léon Bérard), Vincent
Augusto and Xavier Boucher (Mines Saint-Etienne), Lionel
Perrier (Centre Léon Bérard), and Xiaolan Xie
(Mines Saint-Etienne)
Abstract
This paper focuses on the healthcare application
field of Genomic Sequencing and addresses the
challenge of efficient organization and ramp-up
of sequencing platforms. High-throughput
sequencing platforms are currently in an
industrial prototyping phase in France, ahead of large-scale national deployment. To the best of our knowledge, there is no scientifically established generic model or operational-level decision-making support that could guide the medical authorities in designing organizational rules and then managing the deployment of such platforms at the national level. After analyzing the state of the art, a
simulation model of a genome sequencing platform
is presented, then used as a decision-making
support to manage a ramp-up situation for an
application case of a French sequencing
platform. These first results are discussed,
together with the perspective to develop a
generic model and decision-aid approach.
pdf
Technical Session · Healthcare and Life Sciences
Simulation Modeling for Infectious Diseases
Chair: Maria Mayorga (North Carolina State University)
SEAIRD Model to Simulate the Impact of Human
Behaviors
Aidan Fahlman and Gabriel Wainer (Carleton University)
Abstract
Compartmental models have been utilized in the
study and understanding of the COVID-19
pandemic. Traditional models have been expanded
to include geographical level transmission
dynamics and new states. Here, we present a
model based on Cell-DEVS specifications that can
be used to define and study the effects of basic
human behavior. We include mask wearing and
lockdown fatigue, and an adaptable framework
allowing for the rapid prototyping of different
diseases and behaviors. We exemplify how to
build the model and adapt the attributes using
the provinces of Canada as a case study. The
results show the effect mask mandates, mask
wearing, and lockdown fatigue have on case
counts over time.
pdf
A Compartmental Simulation Model to Improve
Interventions for Controlling Poliovirus
Outbreaks
Yuming Sun, Pinar Keskinocak, and Lauren Steimle
(Georgia Institute of Technology) and Stephanie Kovacs
and Steven Wassilak (Centers for Disease Control and
Prevention)
Abstract
Poliomyelitis (polio) is an infectious disease
that paralyzed millions of people worldwide
before polio vaccines were available. Despite
the successes of the Global Polio Eradication
Initiative, there are circulating
vaccine-derived poliovirus outbreaks that
require improved interventions. We built a
compartmental model to simulate the spread of
polio that considers mutation of the
live-attenuated virus (in the oral polio
vaccine) to evaluate the effectiveness of
interventions. We validated the model in a case
study of northern Nigeria and tested the impact
of interventions that varied in the number of
vaccination rounds and the target regions.
Results indicated that the model captures polio
dynamics by matching the case counts and their
spatiotemporal and age distributions in the
data. To stop the outbreaks, stakeholders should
conduct aggressive interventions with more
rounds and broader coverage, especially in the
under-vaccinated regions, compared to the
current practice.
pdf
Technical Session · Healthcare and Life Sciences
Healthcare Operations
Chair: Lambros Viennas (University of Surrey, Bridgnorth
Aluminium Ltd.)
Conceptual Modeling for Perishable Inventory: A Case
Study in Human Milk Banking
Marta Staff and Navonil Mustafee (University of Exeter)
and Natalie Shenker (Imperial College London)
Abstract
The Conceptual Modeling (CM) stage of an M&S
study focuses on developing an abstraction of
the real world for subsequent implementation as
a computer model. Several studies have
acknowledged the importance of CM in the success
of simulation projects. Yet, there is a lack of
literature on applying CM frameworks to
real-world case studies, which arguably impedes
the translation of CM research into practice. In
this paper, we present the development of a
conceptual model, using Robinson’s CM
framework, for our case study investigating the
perishable product of human milk within the milk
banking supply chain. We present the application
of the various stages of the framework,
reporting on stakeholder engagement, which has
allowed us to develop a shared view of the CM.
The paper adds to the literature on CM in
practice, providing a detailed narrative on
developing a conceptual model for perishable
inventory management.
pdf
Clinical Pathway Clustering Using Surrogate
Likelihoods and Replayability Validation
William Thomas Plumb, Alex Bottle, Giuliano Casale, and
Alex Liddle (Imperial College London)
Abstract
Modelling clinical pathways from Electronic
Health Records (EHRs) can optimize resources and
improve patient care, but current methods for
generating pathway models using clustering have
limitations including scalability and fidelity
of the clusters. We propose a novel pathway
modelling approach using Maximum Likelihood (ML)
data clustering on Markov chain representations
of clinical pathways. Our method is calibrated
to produce clusters with low inter-cluster
variability across the pathways. We use machine
learning with Stochastic Radial Basis Functions
(SRBF) kernels for surrogate optimization to
handle non-convexity and propose an incremental
optimization method to improve scalability. We
also define a methodology based on novel
replayability scores to help analysts compare
the fidelity of alternative clustering results.
Results show that our ML method produces
clusters that have higher fidelity in terms of
replayability scores than k-means based
clustering and in capturing queueing contention,
which is important for bottleneck identification
in healthcare.
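To illustrate the Markov-chain representation that such likelihoods are computed over, the sketch below estimates a transition matrix from a cluster of pathways and scores a new pathway's log-likelihood. The state set and smoothing are illustrative assumptions, not the authors' calibrated ML clustering.

```python
import numpy as np

STATES = ["ED", "ward", "ICU", "discharge"]
IX = {s: k for k, s in enumerate(STATES)}

def transition_matrix(pathways, n=len(STATES), smooth=1.0):
    """Estimate a Markov chain from clinical pathways (event sequences),
    with Laplace smoothing so unseen transitions keep nonzero mass."""
    counts = np.full((n, n), smooth)
    for path in pathways:
        for a, b in zip(path[:-1], path[1:]):
            counts[IX[a], IX[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(path, P):
    """Score how well a cluster's chain P explains one pathway."""
    return sum(np.log(P[IX[a], IX[b]]) for a, b in zip(path[:-1], path[1:]))

cluster = [["ED", "ward", "discharge"], ["ED", "ICU", "ward", "discharge"]]
P = transition_matrix(cluster)
print(log_likelihood(["ED", "ward", "discharge"], P))
```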
pdf
Technical Session · Healthcare and Life Sciences
Applications of Simulation in Healthcare
Chair: Bjorn Berg (University of Minnesota)
A Simulation Model and Dashboard for Predicting
Covid-19 Bed Requirements
Best Contributed Applied Paper - Finalist
Yin-Chi Chan, Kaya Dreesbeimdiek, Ajith Kumar Parlikad,
and Tom Ridgman (University of Cambridge); Nicholas J.
Matheson and Ben Warne (University of Cambridge,
Cambridge University Hospitals NHS Foundation Trust);
and Denise Franks (Cambridge University Hospitals NHS
Foundation Trust)
Abstract
The Covid-19 pandemic has placed extraordinary
amounts of stress upon public hospitals
globally. This paper describes a simulation
model for estimating hospital bed demand based
on generated scenarios. Statistical tools were
also developed for generating these scenarios,
in particular, for fitting distributions to
patients' lengths-of-stay and for predicting the
number of daily arrivals of Covid-19 patients. A
web dashboard has been created for ease of use.
The simulation model and statistical tools have
been used to estimate Covid-related bed demand
at an NHS hospital in the East of England.
pdf
Trajectory-Oriented Optimization of Stochastic
Epidemiological Models
Arindam Fadikar (Argonne National Laboratory), Mickael
Binois (Inria Centre at Université Côte
d'Azur), Nicholson Collier and Abby Stevens (Argonne
National Laboratory), Kok Ben Toh (Northwestern
University), and Jonathan Ozik (Argonne National
Laboratory)
Abstract
Epidemiological models must be calibrated to
ground truth for downstream tasks such as
producing forward projections or running what-if
scenarios. The meaning of calibration changes in
case of a stochastic model since output from
such a model is generally described via an
ensemble or a distribution. Each member of the
ensemble is usually mapped to a random number
seed (explicitly or implicitly). With the goal
of finding not only the input parameter settings
but also the random seeds that are consistent
with the ground truth, we propose a class of
Gaussian process (GP) surrogates along with an
optimization strategy based on Thompson
sampling. This Trajectory Oriented Optimization
(TOO) approach produces actual trajectories
close to the empirical observations instead of a
set of parameter settings where only the mean
simulation behavior matches with the ground
truth.
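A generic sketch of the GP-plus-Thompson-sampling loop is given below: fit a GP to observed discrepancies, draw one posterior sample over a parameter grid, and evaluate the sampled minimizer next. The discrepancy function, kernel, and grid are toy assumptions, not the authors' trajectory-oriented surrogate.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def discrepancy(theta, rng):
    """Stand-in for |simulated trajectory - ground truth| at parameter theta."""
    return (theta - 0.6) ** 2 + 0.05 * rng.standard_normal()

rng = np.random.default_rng(0)
X = list(rng.uniform(0, 1, 5))            # initial design
y = [discrepancy(t, rng) for t in X]
grid = np.linspace(0, 1, 201).reshape(-1, 1)

for _ in range(15):                        # Thompson-sampling loop
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-2,
                                  normalize_y=True)
    gp.fit(np.array(X).reshape(-1, 1), np.array(y))
    draw = gp.sample_y(grid, random_state=int(rng.integers(1 << 31)))
    theta_next = float(grid[np.argmin(draw)])  # minimize the sampled surface
    X.append(theta_next)
    y.append(discrepancy(theta_next, rng))

print(min(zip(y, X)))   # best observed discrepancy and its parameter
```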
pdf
Modeling the Potential Impact of Community Health
Volunteers in the Diagnosis and Treatment of Buruli
Ulcer
Fatumah Atuhaire, Christine S. M. Currie, and Rebecca B.
Hoyle (University of Southampton)
Abstract
Buruli ulcer (BU) is a debilitating disease
affecting the skin, soft tissue, and bone. It is
the third most common mycobacterial disease in
humans. The mode of transmission is not fully
understood, posing challenges in prevention, and
delayed diagnosis. One effective approach to
promote early diagnosis and treatment is the
utilization of community health volunteers
(CHVs) for active case-finding. In this study,
we developed an agent-based model to investigate
the impact of CHVs in referring BU patients for
treatment. We compared the effects of two
strategies: offering self-referral alone versus
self-referral combined with CHVs, on the early
diagnosis and treatment of BU. Our findings
confirm previous knowledge that integrating CHVs
in active case-finding leads to earlier
detection of BU cases, decreasing the number of
individuals recovering with major disabilities.
pdf
Track Coordinator - Hybrid Simulation: Anastasia Anagnostou (Brunel University London), Antuela
Tako (Loughborough University)
Technical Session · Hybrid Simulation
Hybrid Simulation for Supply Chain Management
Chair: Anastasia Anagnostou (Brunel University London)
Hybrid Discrete-Event Simulation with Repeated
Machine Learning Prediction-Based Quality Inspection
of Inbound Distribution Center Deliveries
Joost R. Remmelts and Alexander Hübl (University of
Groningen)
Abstract
Business-to-business distributors deem it
necessary to inspect the quality of inbound
deliveries to their distribution centers. This
paper observes a company that experiences an
inefficient quality inspection and wishes to
improve the process. The broader
product-receiving process is under-researched in
warehousing literature but possesses
similarities with manufacturing quality control.
The paper aims to extend prediction-based
quality inspection to the warehousing field. It
applies a hybrid model, combining discrete-event
simulation and machine learning multi-label
classification to decrease the required
inspection volume and evaluate its effects on
the ability of the inspection and the workload
and costs of distribution center operations. The
results show that the inspection volume can be drastically decreased, reducing the workload and costs at the expense of the ability to detect delivery quality flaws that occur infrequently in the training data. The configuration of the classification model determines the degree of inspection volume reduction and the rate of wrongly predicted delivery quality flaws.
pdf
A Hybrid System Dynamics/Input-Output Model for
Studying the Impact of Transportation Delays on the
Resilience of National Supply Chains
William Steven Bland, Lissette Escobar, Andrew Hong,
Grace Kenneally, A.J. Liberatore, and Scott Rosen (MITRE
Corporation)
Abstract
Abstract
economy, transportation delays that impact a
specific industry’s supply chain can
quickly propagate to other industries,
dramatically impacting inventory levels and
economic production on the local, state,
national, and global levels. This research
proposes a hybrid System Dynamics and
Input-Output simulation model that represents
the impact of transportation delays on the flow
of goods across industries and between
geographic regions. The model is applied to a
case study involving the port of Los Angeles to
quantify the direct and indirect effects of a
30- and 60-day delay in container movement on
gross output across the 55 major industries in
the United States. The capability to predict the
scope and scale of the economic impact resulting
from various transportation delays provides
decision makers the opportunity to conduct
preliminary what-if analyses which can support
the development of potential mitigation
strategies before the actual shock occurs.
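The Input-Output half of such a hybrid can be illustrated with the classic Leontief quantity model, where gross output x solves x = Ax + d, i.e., x = (I - A)^{-1} d; the 3-industry coefficient matrix below is invented for the sketch, and a transportation delay is proxied as withheld final demand.

```python
import numpy as np

# Leontief quantity model: gross output x satisfying x = Ax + d.
# A is a toy technical-coefficients matrix; the Leontief inverse
# propagates a demand shock in one industry across all industries.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])
d_base = np.array([100.0, 80.0, 60.0])
d_shock = d_base * np.array([0.8, 1.0, 1.0])   # 20% demand loss, industry 1

L = np.linalg.inv(np.eye(3) - A)                # Leontief inverse
print("gross output, base :", L @ d_base)
print("gross output, shock:", L @ d_shock)
```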
pdf
Evaluating the Effectiveness of Countermeasures in
ICT Supply Chains through Elicitation-Informed
Simulation
Rong Lei, Samar Saleh, Weihong Grace Guo, Elsayed
Elsayed, and Fred Roberts (Rutgers, The State University
of New Jersey) and Paul Kantor (Paul B Kantor,
Consultant)
Abstract
Counterfeiting, the production of imitation
goods, is a critical threat in the Information
and Communication Technology (ICT) manufacturing
supply chain (SC). Countermeasures (CMs) are strategies to mitigate disruptions and enhance an SC. We present a novel hybrid approach for
assessing and selecting CMs in ICT SCs. Our
model incorporates insights from subject matter experts (SMEs), elicited via a Delphi process, into the simulation. This technique is used to study SC
resilience against disruptions caused by
counterfeiting. ICT is an integral part of our
daily lives and life-supporting systems, making
resilience against such threats vital. Using
performance criteria including system service
levels, delivery time, and product quality, our
findings show the importance of integrating
expert knowledge in simulation and the
effectiveness of certain CMs.
pdf
Technical Session · Hybrid Simulation
Hybrid Simulation in Manufacturing
Chair: Fernando Barros (University of Coimbra)
Design of a Serious Game for Safety in Manufacturing
Industry Using Hybrid Simulation Modeling: Towards
Eliciting Risk Preferences
Hanane El Raoui and John Quigley (University of
Strathclyde), Ayse Aslan and Gokula Vasantha (Edinburgh
Napier University), Jack Hanson and Jonathan Corney
(Edinburgh University), and Andrew Sherlock (National
Manufacturing Institute Scotland/ University of
Strathclyde)
Abstract
Conventional methods used to elicit risk-taking
preferences have demonstrated significant
disparities with real-world behaviours,
compromising the validity of the data collected.
Serious gaming (SG) provides a high potential to
bridge this gap. This paper presents a serious game as a novel approach to eliciting risk preferences in an industrial manufacturing context, focusing on the game design and implementation using hybrid simulation modelling. The developed SG serves as a tool for
conducting incentivized experiments aimed at
assessing human behaviour towards risk, to
inform policy recommendations. The game
incorporates two influential factors shaping risk-taking behaviour in a manufacturing environment, namely social learning and production pressure, and uses a variety of game mechanics to promote the players’ motivation and engagement. A usability study was conducted with 10 participants using the System Usability Scale (SUS) to identify problems in the usability of the game. Results show that our game has good usability.
pdf
Hybrid Simulation of Product Reconditioning: A Case
Study
Sean McConville (Air Force Institute of Technology,
University of North Texas); Suman Niranjan and
Arunachalam Narayanan (University of North Texas); and
Joseph Murray (Dayblink Consulting)
Abstract
To gain economic competitive advantage from the
closed loop supply chain (CLSC), firms must
ensure that the cost of reconditioning products
does not exceed the cost of purchasing new
products. The uncertainties associated with
product returns (i.e., product condition,
quantity etc.) make it difficult for managers to
efficiently allocate resources. This study
develops and employs a hybrid simulation (HS)
model as a decision support tool in a case study
from industry. We demonstrate via our HS that
the company could save significant money each
quarter by converting their existing schedules
from two shifts to single shifts and
redistributing resources. Furthermore, we found that maximizing subprocess output does not necessarily reduce costs. The company's focus on
output-oriented subprocess evaluation could
impede cost-saving efforts. Future research will
explore how the mix of new and returned items
affects process yield, different resource
configurations, prioritization of product types,
and processing time disparities.
pdf
Virtual Planning of a Metal Additive Manufacturing
Factory Using Techno-Economic Hybrid Simulation
Models
Eldar Shakirov, Haden Quinlan, and A. John Hart
(Massachusetts Institute of Technology)
Abstract
Factory simulation can guide leaner production
operations and resilient supply chains by
informing capital allocation and real-time
decision-making. This is especially true for
emerging production methods, like additive
manufacturing (AM), where a lack of expertise
and relative technological novelty make it
difficult to quantitatively assess technology
economics across applications. While reported cost models provide detailed analysis of the AM printing process, accurate modeling requires
specific evaluation of process-level and
production-level considerations that
significantly impact factory dynamics and cost.
Advances in factory simulation modeling
therefore promise the development of
comprehensive and actionable cost models. This
paper reviews progress in simulation-based
costing, hybrid simulation, and automated model
generation, and proposes an integrated approach
for cost modeling using an AM-based factory. We
demonstrate the feasibility of this approach by
simulating the production of two common AM part
geometries, and evaluate the associated cost and
time performances of different factory
configurations.
pdf
Technical Session · Hybrid Simulation
Hybrid Simulation Methodology
Chair: Steffen Strassburger (Technische Universität
Ilmenau)
Choosing the Right Entity Size to Minimize
Discretization Error in Discrete Event Simulation
Models
Leonardo Chwif (IMT), Wilson Pereira (Simulate), and
José Arnaldo Barra Montevechi (Federal University
of Itajubá)
Abstract
In discrete-event simulation models, the way we
establish the relationship between a real-world
object and the model entity (a single
indivisible object flowing through the model) is
crucial to some classes of problems due to
possible computational unfeasibility. In
addition, the entity size also relates to
results accuracy and simulation running time - a
subject barely explored in the literature. In
this paper, these questions were investigated
through case studies which supported our initial
hypothesis about the general relationships
involved. Then, a simple algorithm was developed
for correctly choosing the best entity size to
provide the desired accuracy, measured as a
discretization error, with promising results.
The limitations of the algorithm are addressed, and some directions for future research are indicated.
pdf
How Not to Visualize Your Simulation Output
Data
Jonas Genath (Ilmenau University of Technology) and
Steffen Strassburger (Technische Universität
Ilmenau)
Abstract
Hybrid modeling and simulation studies combine
well-defined methods from other disciplines with
a simulation technique. Especially in the area
of output data analysis of simulation studies,
there is great potential for hybrid approaches
that incorporate methods from machine learning
and AI. For their successful application, the
analytical capabilities of machine learning and
AI must be combined with the interpretive
capabilities of humans. In most cases, this
connection is achieved through visualizations.
As methods become more complicated, the demands
on visualizations are increasing. In this paper,
we conduct a data farming study and delve into
the analysis of the result data. In doing so, we
uncover typical errors in visualizations making
the interpretation and evaluation of the data
difficult or misleading. We then apply the
concepts of visual analytics to these
visualizations and derive general guidelines to
help simulation users to analyze their
simulation studies and present results
unambiguously and clearly.
pdf
Approximate Discrete-Event Method for Supervisory
Control
Maaz Jamal and Gabriel Wainer (Carleton University)
Abstract
Supervisory systems are used to monitor a system and act when certain events are detected. Studying supervisory control using formal Discrete Event Modelling & Simulation allows analyzing an application and then using the model to build the controllers. Supervisors can lead to a state-space explosion as the model size increases; thus, reducing the state-space complexity can expand the practicality of the model. We present
a method based on Discrete Event System
Specifications using an approximate method that
reduces the state space complexity. The plant
models and synthesized controllers can then be
deployed on embedded hardware providing model
continuity. We discuss the method and present a
case study of a supervisory system.
pdf
Technical Session · Hybrid Simulation
Hybrid Simulation Applications I
Chair: Navonil Mustafee (University of Exeter, The
Business School)
Smart Sports Predictions via Hybrid Simulation: NBA
Case Study
Ignacio Erazo (Georgia Institute of Technology)
Abstract
Increased data availability has stimulated the
interest in studying sports prediction problems
via analytical approaches; in particular, with
machine learning and simulation. We characterize
several models that have been proposed in the
literature, all of which suffer from the same
drawback: they cannot incorporate rational
decision-making and strategies from
teams/players effectively. We tackle this issue
by proposing hybrid simulation logic that
incorporates teams as agents, generalizing the
models/methodologies that have been proposed in
the past. We perform a case study on the NBA
with two goals: i) study the quality of
predictions when using only one predictive
variable, and ii) study how much historical data
should be kept to maximize prediction accuracy.
Results indicate that there is an optimal range
of data quantity and that studying what data and
variables to include is of extreme importance.
pdf
Simulation Model to Forecast Gender Pension Wealth
Gap in the Light of Demographic Changes
Bożena Mielczarek (Wroclaw University of Science
and Technology)
Abstract
The ageing of the population has forced changes
in many areas of social policy, including
pension systems. Countries are reforming their
retirement policies in such a way that the size
of pension benefits depends on the total period
of employment, contributions made, and life
expectancy. Because employment plays a significant role in the accumulation of pension capital in these types of systems, a gender pay gap translates into a gender pension gap. In this article, we propose a hybrid simulation model to analyze the impact of long-term economic and demographic changes on the level of pension benefits when a worker retires, with a special focus on gender pension wealth gaps. The
model combines demographic simulation conducted
using a systems dynamics approach with discrete
stochastic simulation by means of which we model
the employment history of men and women. The
model uses data from Polish statistical
databases.
pdf
Hybrid Simulation in Construction
Masoud Fakhimi (University of Surrey); Navonil Mustafee
(University of Exeter, The Business School); and Tillal
Eldabi (University of Bradford)
Abstract
Hybrid Simulation (HS) is the application of
multiple simulation techniques, for example,
Discrete-event, Agent-based and System Dynamics,
in the context of a single simulation study. HS
is a growing area of research; numerous papers
have delved into conceptualizations, frameworks,
and case studies applied to specific application
domains. The focus of our paper is on the construction domain. Through a systematic methodology for literature assessment, we present a synthesis of the existing literature,
providing insights on the choice of simulation
technique, the context of its application, and
the level of implementation, among others.
Through an in-depth review of 36 relevant papers
published over the past two decades, we
contribute to a comprehensive understanding of
the current state-of-the-art in HS as applied to
Construction. The results of our investigation
underscore the immense potential of HS in
construction, with broad applicability spanning
diverse areas such as structural analysis and
building performance evaluation.
pdf
Technical Session · Hybrid Simulation
Hybrid Simulation Applications II
Chair: Tillal Eldabi (University of Bradford)
Simulating Technician Populations with Tandem
Analytic and Discrete Event Models
George Ryan Ambrose and Francois Alex Bourque (Defence
Research and Development Canada)
Abstract
Military workforce modelling is typically
limited to either a series of analytic
equations, or a simulation model. However,
developing two such models in tandem has the
benefit of cross-validation as well as the
opportunity to explore problem space not easily
accessed by a single approach. In particular,
business rules for force employment are not
easily described by closed-form equations while
simulation models require exceedingly large
computational resources to reach the asymptotic
behaviour provided by analytic equations. This
work leverages the benefits of both approaches to describe the population and career trends of individual technicians. As this career tends to have well-defined training requirements, and hence a clear delineation between semi-functional apprentices and fully-functional journeymen, it
is well suited to population modelling. Notional
distributions for career parameters are assumed
and the results for career progression and fleet
readiness are compared.
pdf
πHyFlow: A Modular Process Interaction
Worldview
Fernando Barros (University of Coimbra)
Abstract
Worldviews play a central role in M&S providing
the basic constructs to describe simulation
models. Three main worldviews have been defined:
event scheduling, activity scanning, and process
interaction (PI). The latter has been described in two flavors, one centered on the network of resources and the other on the transitory transactions that flow through the network. In this
paper we present a new M&S approach based on the
πHYFLOW formalism that combines network and
transaction PI, while keeping the support for
modular and hierarchical models. We demonstrate
πHYFLOW expressiveness by representing a
hybrid production unit with a variable number of
machines subjected to breakdowns. The hybrid
model combines a fluid queue describing the
work-in-progress, with discrete events modeling
machine arrivals, departures, and breakdowns.
Arrivals and departures of machines are achieved
through modular communication, enabling model
composition with other πHYFLOW components.
pdf
Logistics Supply Chains Transportation
Technical Session · Logistics Supply Chains Transportation
Automated Vehicles
Chair: Carles Serrat (Universitat Politècnica de
Catalunya-BarcelonaTECH)
Simulating and Evaluating Internal Logistics
Strategies for Suppliers in Just-in-Sequence Supply
Systems in the Automotive Industry
Helen Christina Sand, Marvin Auf der Landwehr, and
Christoph von Viebahn (Hochschule Hannover)
Abstract
The reliability of just-in-sequence supply
systems depends to a large extent on the
efficiency of a supplier’s internal
logistics distribution system. Thus, improving
the logistics efficiency is a major objective
for many suppliers in the automotive industry.
In this paper, a discrete event simulation model
is developed to evaluate the operational
implications of different logistics strategies
in just-in-sequence supply systems. Building
upon the case of a major automotive supplier
from Germany, the implications of various
transportation resources and routing approaches
are investigated and analyzed when it comes to
the supply of components from an internal
warehouse to the assembly lines. Experimental
results show that the combined,
load-carrier-specific use of forklifts, pallet
trucks and tugger trains holds a high potential
to achieve more efficient supply operations and
meet different operational performance criteria
such as downsizing the vehicle fleet, improving
supply reliability and punctuality at the
assembly lines, or minimizing warehouse traffic.
pdf
Route Selection in Mixed Fleet Warehouses
Anna Rotondo (Irish Manufacturing Research)
Abstract
Warehouse systems are progressively shifting
towards mixed fleet models where automated and
manually operated vehicles work together sharing
the same floorspace. This is posing
communication and co-ordination challenges from
both a design and an operational perspective.
Mixed fleet co-ordination is particularly
challenging from a traffic control viewpoint due
to the erratic behavior that human drivers may
exhibit. In this work, an optimisation framework
that aims at selecting the optimal route among
candidate ones in a mixed fleet warehouse
environment is developed. More specifically, the
foundational deterministic components of the
framework are described and an interactive
dashboard used for verification purposes is
presented. The development work of the
stochastic component and the simulator is still
ongoing. Initial feedback based on virtual
testing conducted by an industrial partner
suggests that a static optimisation approach
based on historical traffic information may not
lead to optimal choices if human behavior is neglected.
pdf
Modeling Autonomous Vehicle-Targeted Aggressive
Merging Behaviors in Mixed Traffic Environment
JongIn Bae (Georgia Institute of Technology), Abhilasha
Jairam Saroj (Oak Ridge National Laboratory), Wonho Suh
(Hanyang University), and Michael P. Hunter and
Angshuman Guin (Georgia Institute of Technology)
Abstract
Advances in Autonomous Vehicle (AV) technology have fueled industry and research efforts to study the integration of AVs into the traffic network. This study focuses on the transition phase from all Human-Driven Vehicles (HDVs) in the network to all AVs, where these different
vehicle types coexist in a mixed traffic
environment. This paper investigates the
potential impacts of aggressive merging
behaviors by human drivers on traffic
performance in a mixed environment. For this,
three vehicle types, namely AVs, HDVs, and Aggressive HDVs (AHDVs), are modeled in an
open-source microscopic traffic simulation
model, SUMO. In the developed simulation, the
AHDVs are modeled to emulate aggressive merging
behaviors in front of AVs at a merge section of
a freeway exit ramp. Several experiments are
used to study the impact of such behavior.
Results show travel-time gains by AHDVs at the
expense of AVs and HDVs.
pdf
Technical Session · Logistics Supply Chains Transportation
New Approaches
Chair: Canan Gunes Corlu (Boston University)
Estimating Parameters with Data Farming for
Condition-Based Maintenance in a Digital Twin
Alexander Wuttke, Joachim Hunker, and Markus Rabe (TU
Dortmund University) and Jan-Philipp Diepenbrock (IVA
Schmetz GmbH)
Abstract
Nowadays, vast amounts of data can be collected
by sensors and used for data-driven approaches.
Digital twins provide a framework to exploit
these data for solving various issues. For many
companies in the industrial sector, machine
maintenance is a significant issue. Maintenance
is essential for high overall equipment
efficiency, but it can also be costly.
Therefore, it should only be performed when
necessary, based on the machine’s
condition. Condition monitoring is used to
assess a machine’s condition periodically,
allowing for condition-based maintenance. In
this paper, a simulation-based approach for
parameter estimation is presented that
contributes to condition-based maintenance. It
introduces condition indicators for certain
features of machines and demonstrates how to
evaluate them using data farming, which employs
simulation models as data generators.
Additionally, the implementation of this
approach in digital twins is discussed.
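To illustrate the data-farming idea described above, here is a minimal Python sketch in which a placeholder simulation model is run across a design grid of parameters and the outputs are aggregated into a condition indicator. The wear model, parameter values, and replication count are illustrative assumptions, not the authors' setup.

```python
import itertools
import random

random.seed(2)

def simulate_wear(load, temperature):
    """Placeholder simulation model: returns an observed condition feature."""
    return 0.04 * load + 0.01 * temperature + random.gauss(0, 0.2)

# Design grid (load x temperature) with replications: the "farm" of runs.
settings = list(itertools.product([40, 60, 80], [150, 200, 250]))
farm = [{"load": l, "temp": t, "wear": simulate_wear(l, t)}
        for l, t in settings for _ in range(20)]

# Estimated condition indicator per setting, e.g., to set maintenance thresholds.
for l, t in settings:
    rows = [r["wear"] for r in farm if r["load"] == l and r["temp"] == t]
    print(l, t, round(sum(rows) / len(rows), 3))
```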
pdf
Approach for Classifying the Automatability of
Verification and Validation Techniques
Katharina Langenbach and Markus Rabe (TU Dortmund
University)
Abstract
Simulation is a proven method in industry and research for establishing the basis for further decisions. Therefore, the credibility of its
results is of major importance. Generally,
simulation studies are guided by procedure
models comprised of several phases with specific
results. To assess the credibility, verification
and validation (V&V) is used by applying V&V
techniques to these phase results, which
requires significant effort. Additionally, the
amount of processed data increases and there is
a growing desire for real-time-adjustable
models, increasing the effort required for V&V
while reducing the time available. One way to
address these challenges is to automate V&V. For
this purpose, the notions of automation and
associated automation levels have to be
transferred to the domain of V&V in order to
assess and classify the automatability of
individual V&V techniques. In this way, the effort required to apply V&V techniques can be reduced while maintaining or increasing the credibility of the simulation.
pdf
A Simulation-Based TDABC Model to Manage Supply Chain
Costing: A Case Study
Siham Rahoui, John Crowe, and Amr Mahfouz (Technological
University Dublin)
Abstract
Effective management of supply chain costing is
crucial for decision-making during times of
disruption. It provides accurate cost
indicators, enabling organizations to adapt to
the risks of disruptions and mitigate their
adverse effects. Supply chain costing literature
has shown that traditional cost accounting
approaches are inadequate in addressing the
dynamic and complex nature of supply chain
performance and the nonlinear behavior of the
involved processes. Consequently, this paper
presents a simulation-based supply chain costing
framework that integrates discrete event
simulation and time-driven activity-based
costing to explore the dynamics of management
accounting tools in a real context with all
their complexities and interdependencies. The
framework will be applied to the logistics
function of an automotive supply chain to
demonstrate the applicability of a static versus
a dynamic time-driven activity-based costing
model, assess their suitability to reflect the real operational performance of the supply chain, and suggest ways to improve it.
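As a minimal illustration of the time-driven activity-based costing logic underlying such a framework, the sketch below computes activity costs as unit time times volume times a capacity cost rate. All rates, times, and activity names are hypothetical; in a dynamic, simulation-based variant, the unit times would come from the discrete event simulation rather than static estimates.

```python
# Minimal time-driven activity-based costing (TDABC) sketch.
# Illustrative only: rates, times, and activities are hypothetical,
# not taken from the paper's case study.

CAPACITY_COST_RATE = 0.80  # euro per minute of supplied logistics capacity

# Unit times (minutes) estimated for each logistics activity.
unit_times = {"receive_pallet": 4.0, "put_away": 6.5, "pick_order": 3.2}

# Observed activity volumes over the costing period.
volumes = {"receive_pallet": 1200, "put_away": 1100, "pick_order": 5400}

def tdabc_cost(unit_times, volumes, rate):
    """Cost of each activity = unit time * volume * capacity cost rate."""
    return {a: unit_times[a] * volumes[a] * rate for a in unit_times}

costs = tdabc_cost(unit_times, volumes, CAPACITY_COST_RATE)
used_minutes = sum(unit_times[a] * volumes[a] for a in unit_times)
print(costs, f"capacity used: {used_minutes:.0f} min")
```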
pdf
Technical Session · Logistics Supply Chains Transportation
Freight and Complex Supply Chains
Chair: Xueping Li (University of Tennessee)
A Deep Q-Network Based on Radial Basis Functions for
Multi-Echelon Inventory Management
Liqiang Cheng and Jun Luo (Shanghai Jiao Tong
University), Weiwei Fan (Tongji University), and Yidong
Zhang and Yuan Li (Alibaba)
Abstract
This paper addresses a multi-echelon inventory
management problem with a complex network
topology where deriving optimal ordering
decisions is difficult. Deep reinforcement
learning (DRL) has recently shown potential in
solving such problems, while designing the
neural networks in DRL remains a challenge. In
order to address this, a DRL model is developed
whose Q-network is based on radial basis
functions. The approach can be more easily
constructed compared to classic DRL models based
on neural networks, thus alleviating the
computational burden of hyperparameter tuning.
Through a series of simulation experiments, the
superior performance of this approach is
demonstrated compared to the simple base-stock
policy, producing a better policy in the
multi-echelon system and competitive performance
in the serial system where the base-stock policy
is optimal. In addition, the approach
outperforms current DRL approaches.
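A minimal sketch of the core idea, assuming a Q-function expressed as a linear combination of fixed Gaussian radial basis features; the paper's actual architecture, dimensions, and training scheme may differ.

```python
import numpy as np

# Sketch of a Q-function built on radial basis features: Q(s, a) = w_a . phi(s).
# Centers, widths, and dimensions are illustrative assumptions, not the
# paper's actual design.

rng = np.random.default_rng(0)
n_centers, state_dim, n_actions = 20, 4, 5   # e.g., discretized order quantities
centers = rng.uniform(0.0, 1.0, (n_centers, state_dim))
width = 0.25
W = np.zeros((n_actions, n_centers))         # one weight vector per action

def phi(state):
    """Gaussian RBF features of an inventory state (e.g., echelon stock levels)."""
    d2 = ((centers - state) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def q_values(state):
    return W @ phi(state)

def td_update(s, a, reward, s_next, alpha=0.05, gamma=0.99):
    """One Q-learning step: only the weights of the taken action change."""
    target = reward + gamma * q_values(s_next).max()
    td_error = target - q_values(s)[a]
    W[a] += alpha * td_error * phi(s)
```

Because the features are fixed, only the linear weights are learned, which is what reduces the hyperparameter tuning burden relative to training a full neural network.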
pdf
Simulation-based Cost Modeling to Measure the Effect
of Automated Trucks in Inter-terminal Container
Transportation
Ann-Kathrin Lange, Johannes Hinckeldeyn, Hendrik Rose,
Nicole Nellen, and Michaela Grafelmann (Hamburg
University of Technology)
Abstract
Container transports within ports are
characterized by mostly manual trucks and many
handling operations in relatively small areas.
Accordingly, they incur a disproportionately
large cost in maritime transport chains. One way
to reduce these costs is to use automated trucks
in a port-internal transport system. Such
systems have only been used on terminals, but
not within whole ports. Thus, it is important to
determine the design parameters of such
transport systems. Discrete-event simulation is
particularly suitable for investigating planned
systems and controls in logistics. However, the
costs of such systems are usually neglected.
Therefore, a simulation-based cost model is used
in this study to determine the
cost-effectiveness of automated trucking
systems. It is shown which factors possess the
greatest influence on the cost-effectiveness of
port-internal container transports. Furthermore,
it can be estimated for the first time which
cost savings can be achieved by using automated
trucks for port-internal container transports.
pdf
Large Scale Logistics Network Simulation and Its
Application in JD Logistics
Sheng Liu (Institute of Automation) and Xiaotian Zhuang,
Liang Yan, Yu Wang, and Shengnan Wu (Jingdong Logistics)
Abstract
This paper proposes a large-scale logistics
network simulation method to reduce package
delivery delay and package loss caused by the
sudden increase of package transportation demand
during large-scale promotion activities such as
11.11 and 6.18. We develop a large-scale
logistics network simulation software for a
large logistics enterprise. According to its
actual logistics network, we establish its
equivalent virtual logistics network in the
simulation software. Then we simulate and adjust the virtual logistics network in advance. Finally, we regulate the actual logistics network according to the virtual network. As a result, we reduce the transportation time, distance, and costs for the logistics enterprise. The simulation software can simulate the distribution of 500 million packages over one month in less than 30 minutes on a personal computer.
pdf
Technical Session · Logistics Supply Chains Transportation
Hybrid Models
Chair: Sahil Belsare (Walmart, Inc. USA; Northeastern
University)
An Integrated System Dynamics and Discrete Event
Supply Chain Simulation Framework for Supply Chain
Resilience with Non-stationary Pandemic Demand
Mustafa Camur (GE Research); Chin-Yuan Tseng (Georgia
Institute of Technology); Aristotelis E. Thanos (GE
Research); Chelsea C. White (Georgia Institute of
Technology); Walter Yund (GE Research); and Eleftherios
Iakovou (Texas A&M University, Texas A&M Energy
Institute)
Abstract
COVID-19 resulted in some of the largest supply
chain disruptions in recent history. To mitigate
the impact of future disruptions, we propose an
integrated hybrid simulation framework to couple
nonstationary demand signals from an event like
COVID-19 with a model of an end-to-end supply
chain. We first create a system dynamics
susceptible-infected-recovered (SIR) model,
augmenting a classic epidemiological model to
create a realistic portrayal of demand patterns
for oxygen concentrators (OC). Informed by this
granular demand signal, we then create a supply
chain discrete event simulation model of OC
sourcing, manufacturing, and distribution to
test production augmentation policies to satisfy
this increased demand. This model utilizes
publicly available data, engineering teardowns
of OCs, and a supply chain illumination to
identify suppliers. Our findings indicate that
this coupled approach can use realistic demand
during a disruptive event to enable rapid
recommendations of policies for increased supply
chain resilience with controlled cost.
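The system dynamics side of such a coupling can be sketched in a few lines of Python: an SIR model integrated with Euler steps whose infection curve drives a demand signal handed to the downstream discrete event model. All parameter values below are illustrative, not those of the paper.

```python
# Euler integration of a classic SIR model; the infection curve drives a
# hypothetical demand signal for oxygen concentrators. Parameter values
# are illustrative assumptions.

beta, gamma = 0.30, 0.10          # transmission and recovery rates (per day)
N, dt, days = 1_000_000, 0.5, 365
S, I, R = N - 100.0, 100.0, 0.0
demand = []                        # daily OC demand signal for the DES model

for step in range(int(days / dt)):
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
    if step % int(1 / dt) == 0:
        # Assume a fixed fraction of active infections needs a concentrator.
        demand.append(0.02 * I)

print(f"peak daily demand: {max(demand):.0f} units")
```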
pdf
Integrating a Mode Choice Model into Agent-based
Simulation for Freight Transport Planning and
Decarbonization Analysis
Senlei Wang, Dhanan Sarwo Utomo, and Philip Greening
(Heriot-Watt University)
Abstract
This paper presents a framework for integrating
a discrete mode choice model with agent-based
simulation. The integrated framework provides a
more realistic representation of long-haul
freight transport and is applied to the
real-world scenarios of moving freight from
ports to inland destinations via road, rail, and
inland waterways. It incorporates a mode choice
component that captures demand shifts between modes in response to different policy and vehicle technology interventions. The
objective is to investigate the financial and
environmental impacts of introducing new vehicle
technologies and associated energy sources under
different future scenarios in a UK multimodal
freight system.
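A discrete mode choice component of this kind is often a multinomial logit model. The sketch below computes mode shares for road, rail, and inland waterway from hypothetical cost and time attributes; the coefficients are illustrative assumptions, not the paper's estimates.

```python
import math

# Multinomial logit mode choice sketch for road / rail / inland waterway.
# Utility coefficients and attribute values are illustrative assumptions.

modes = {
    #            cost (euro/t), time (h)
    "road":      (42.0,  9.0),
    "rail":      (30.0, 16.0),
    "waterway":  (24.0, 30.0),
}
BETA_COST, BETA_TIME = -0.08, -0.10   # disutility per euro and per hour

def choice_probabilities(modes):
    utilities = {m: BETA_COST * c + BETA_TIME * t for m, (c, t) in modes.items()}
    denom = sum(math.exp(u) for u in utilities.values())
    return {m: math.exp(u) / denom for m, u in utilities.items()}

print(choice_probabilities(modes))
# A policy intervention (e.g., a road toll) is modeled by changing an
# attribute and recomputing the shares that feed the agent-based simulation.
```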
pdf
Technical Session · Logistics Supply Chains Transportation
Production Planning
Chair: Katharina Langenbach (TU Dortmund University)
Improving Buffer Storage Performance in Ceramic Tile
Industry via Simulation
Marco Taccini (University of Modena and Reggio Emilia);
Giulia Dotti (University of Modena and Reggio Emilia,
Marco Biagi Foundation); Manuel Iori (University of
Modena and Reggio Emilia); and Anand Subramanian
(Universidade Federal da Paraíba)
Abstract
This study aims at identifying the best strategy
to temporarily store products within a buffer
area in an Italian ceramic tile company. The
storage policy is analyzed to maximize the
storage capacity, facilitate operators'
activities, and, consequently, improve the
warehouse logistics performance. A discrete event simulation was conducted using Salabim, a Python-based open-source package, in order to determine the best policy. We compare the
performance of the current storage policy, based
on technical production properties of products,
and a newly proposed one, based on products'
downstream destination. The results suggest that the proposed strategy significantly improves the performance of the buffer area management. The approach can be transferred to other applications, contributing to the
literature on simulation-based decision-making
in material management. Furthermore, the study
provides a functional case study showing the
potential and achievable results of Salabim for
modeling complex systems.
pdf
Simulating the Impact of Forecast related Overbooking
and Underbooking Behavior on MRP Planning and a
Reorder Point System
Wolfgang Seiringer and Klaus Altendorfer (University of
Applied Sciences Upper Austria) and Thomas Felberbauer
(University of Applied Sciences St. Pölten)
Abstract
Production planning and its parameterization are critical to fulfil customer demands and to react successfully to changes in highly volatile markets. Therefore, demand updates should be
considered to improve production planning. In
this paper the performance of two production
planning methods MRP (Material Requirements
Planning) and RPS (Reorder Point System) are
compared in a multi-item single stage system
where customer orders are updated in a rolling
horizon manner. Applying a simulation study, we
investigate the performance of MRP and RPS for
biased and unbiased forecast information and
discuss the difference in the optimal planning
parameters. The study shows that for a production system with underbooking and low demand uncertainty, the RPS method is superior, while in all other scenarios MRP outperforms RPS. For
overbooking scenarios, the results show that MRP
leads to overall cost improvements ranging from
8% to 30%.
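For readers unfamiliar with the RPS baseline, a minimal reorder point simulation looks like the following sketch; the reorder point, lot size, lead time, and demand distribution are illustrative assumptions, not the paper's experimental design.

```python
import random

# Minimal reorder point system (RPS) sketch: order a fixed lot whenever the
# inventory position falls to or below the reorder point.

random.seed(1)
REORDER_POINT, LOT_SIZE, LEAD_TIME = 40, 60, 3     # units, units, periods
on_hand, pipeline = 80, []                         # pipeline: (arrival, qty)

for period in range(52):
    on_hand += sum(q for t, q in pipeline if t == period)   # receive orders
    pipeline = [(t, q) for t, q in pipeline if t > period]
    demand = random.randint(5, 20)                 # stochastic customer demand
    on_hand -= demand                              # negative on_hand = backlog
    position = on_hand + sum(q for _, q in pipeline)
    if position <= REORDER_POINT:
        pipeline.append((period + LEAD_TIME, LOT_SIZE))

print("final inventory position:", on_hand + sum(q for _, q in pipeline))
```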
pdf
Pick Order Assignment and Order Batching Strategy for
Robotic Mobile Fulfilment System Warehouse
Shuo-Yan Chou, Aisyahna Nurul Mauliddina, Anindhita
Dewabharata, and Ferani Eva Zulvia (National Taiwan
University of Science and Technology)
Abstract
This study aims to optimize the order
fulfillment process in a Robotic Mobile
Fulfilment System warehouse by improving the
order batching and the pick order assignment in
order-picking activities using a simulation
approach. The order-to-station assignment
considers the association between the new order
and the in-progress order at the station instead
of random assignment. The proposed model aims to
maximize the total throughput, maximize the
pile-on value, and minimize the required number
of pods. The proposed model is compared with a
baseline scenario. The result shows that the
proposed model significantly decreases the
number of required pods by 40%, increases the
pile-on by 60%, and increases the throughput by
4%. This result shows that the proposed
strategy can improve the efficiency of the
order-picking process by ensuring every order
and/or batch of orders always goes to the
picking station with the most similar order.
pdf
Technical Session · Logistics Supply Chains Transportation
Risks and Resilience
Chair: Joachim Hunker (Technische Universität
Dortmund)
A Supply Chain Resilience Case Study Linking Key
Resilience Areas with Process Mining
Frank Schätter, Florian Haas, and Frank Morelli
(Pforzheim University of Applied Sciences)
Abstract
At a time when supply chain disruptions are on
the rise, supply chain managers are often
overwhelmed by a simple question: How resilient
is my supply chain and how can the status quo be
improved? We present a case study of a
manufacturing company in Central Europe that
uses a two-step approach to help managers answer
these questions. In the first stage, Key
Resilience Areas (KRAs) are applied to
transactional data to identify critical elements
of the supply chain and their potential impacts.
In the second stage, process mining is used to
analyze the root causes of the identified
impacts. In the case study, we reveal vulnerable
locations and relevant product characteristics
of the material flows of the company's inbound
network, and process mining is used to analyze
why, for example, a single sourcing strategy was
chosen for a critical supplier.
pdf
Conceptualizing Resilience in Supply Chain
Simulation
Simon Taylor, Anastasia Anagnostou, and Kate Mintram
(Brunel University London) and Ed Hua, Andreas Tolk,
Mark Pfaff, and David Mendonca (MITRE Corporation)
Abstract
Supply chains (SCs) collaborate in production
and consumption across the world. SC management
techniques attempt to optimize and balance
supply chain operations. SC simulation can help
support this by exploring “what-if”
scenarios across key performance indicators,
particularly when SCs are subject to potentially
disruptive events. Resilience is the capacity
for an enterprise to survive, adapt, and grow in
the face of turbulent change. Change engenders
SC vulnerabilities and management control
attempts to create SC capabilities to address
them. We are investigating the feasibility of
creating a generic SC Simulation framework that
represents sources of vulnerability and
resilience and allows decision makers to explore
potential capabilities to address them. This
article reports progress on the first step of
this study towards the creation of a conceptual
model of SC resilience.
pdf
Building and Operating Resilient Transportation Yards
Using Simulation
Hafsa Binte Mohsin, Jae Yong Lee, and Vamshi Krishna
Suvarna (Amazon)
Abstract
Developing a comprehensive model is an effective
approach for gaining insight into and analyzing
complex systems such as transportation yards.
Following this approach, a data-driven
agent-based simulation model has been developed
for transportation yards at Amazon which
captures the features and processes of the
system. By simulating different scenarios and
using simulation output performance indicators
like yard/parking slip/dock door utilization,
entry/exit gate queue, and late departure count,
this model helps to identify potential
bottlenecks, inefficiencies, and risks in the
system. This information is used for strategic
decision making and/or improving the system.
Furthermore, the user can find ways to increase
the yards’ daily maximum volume process
capacities through multiple
‘what-if’ scenarios. The model achieves a mean absolute error (MAE) of 6% and a root mean square error (RMSE) of 7%. This paper presents the overview, current use cases, and future work for improving the simulation model.
pdf
Technical Session · Logistics Supply Chains Transportation
Traffic Simulation
Chair: Dave Goldsman (Georgia Institute of Technology)
Optimizing Arterial Traffic Signal Settings: Shotgun
Version for Simultaneous Perturbation Stochastic
Approximation Approach
Yen-Hsiang Chen and Michael Franciudi Hartono (National
Taiwan University)
Abstract
Recent advancements in hardware computation speed have allowed stochastic microscopic traffic simulators to be embedded in signal optimization systems. In this study, simultaneous perturbation stochastic approximation (SPSA), an efficient difference-based gradient search method, is applied in the solver of a signal optimization system due to (i) the low total number of replications it requires and (ii) its capability to apply variance reduction techniques (VRT). The case study shows that the objective value, in terms of road users’ delay, indeed improves over iterations. Since gradient-based methods may become trapped in local optima, this study further applies a shotgun mechanism that allows the better solutions of the current stage to proceed to the next stage. With the shotgun process, the quality of the solution can be further improved.
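The appeal of SPSA is that each gradient estimate needs only two simulation evaluations, regardless of the number of signal parameters. A minimal sketch follows, with a stand-in objective in place of the traffic simulator, illustrative gain sequences, and a simple multi-start "shotgun" outer loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_delay(theta):
    """Placeholder for a stochastic simulation of user delay at settings theta."""
    return ((theta - 30.0) ** 2).sum() + rng.normal(0.0, 5.0)

def spsa(theta0, iters=200, a=0.5, c=2.0):
    theta = theta0.copy()
    for k in range(1, iters + 1):
        ak, ck = a / k ** 0.602, c / k ** 0.101     # standard gain sequences
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
        # Two noisy evaluations estimate the whole gradient vector at once.
        g = (simulated_delay(theta + ck * delta)
             - simulated_delay(theta - ck * delta)) / (2.0 * ck * delta)
        theta -= ak * g
    return theta

# "Shotgun": run several random starts and keep the best, to escape local optima.
starts = [rng.uniform(10.0, 60.0, size=4) for _ in range(5)]
best = min((spsa(s) for s in starts), key=simulated_delay)
print(best)
```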
pdf
Breaking Through the Traffic Congestion: Asynchronous
Time Series Data Integration and XGBOOST for Accurate
Traffic Density Prediction
Eloi Garcia, Carles Serrat, and Fatos Xhafa (Universitat
Politècnica de Catalunya-BarcelonaTECH)
Abstract
The proliferation of data collection from smart
cities has resulted in an exponential growth in
the volume of measurements available for
analysis. However, collecting all parameters
concurrently at the same location is not
feasible due to the complex nature of the real
world. We present an innovative methodology that integrates asynchronous time series data from a variety of sources to facilitate data enrichment and city-wide behavior simulation. A case study
on OpenDataBCN attests to the efficacy of this
approach via an XGBoost model, predicated on
geographical coordinates and timestamp
disparities. The consolidation of data from different sources improves the richness and granularity of the information available for analysis, thereby revealing previously hidden patterns and relationships, yielding new insights, and underscoring the potential of this methodology for sustainable and efficient data enrichment as well as new possibilities for simulation based on smart city datasets.
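One common way to realize such asynchronous enrichment in Python is a nearest-timestamp join. The sketch below uses pandas merge_asof with hypothetical column names and tolerance; the enriched frame (plus coordinate and timestamp-gap features) would then feed a gradient-boosted model such as XGBoost.

```python
import pandas as pd

# Each traffic-density record is joined with the nearest-in-time reading
# from another sensor stream. Column names and tolerance are assumptions,
# not the paper's schema.

density = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-05-01 08:00", "2023-05-01 08:10"]),
    "density": [61.0, 74.5],
}).sort_values("timestamp")

weather = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-05-01 07:58", "2023-05-01 08:11"]),
    "rain_mm": [0.0, 1.2],
}).sort_values("timestamp")

enriched = pd.merge_asof(density, weather, on="timestamp",
                         direction="nearest",
                         tolerance=pd.Timedelta("5min"))
print(enriched)
```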
pdf
Technical Session · Logistics Supply Chains Transportation
Simheuristic Approaches
Chair: Michael Kuhl (Rochester Institute of Technology)
A Dynamic Forecast Demand Scenario Analysis to Design
an Automated Parcel Lockers Network in Pamplona
(Spain) Using a Simulation-Optimization Model
Irene Izco (Public University of Navarre); Adrian
Serrano-Hernandez and Javier Faulin (Public University
of Navarre, Institute of Smart Cities); and Bartosz
Sawik (AGH University of Science and Technology)
Abstract
The disruptions experienced by the last mile
delivery processes during the SARS-CoV-2
pandemic have inevitably raised the dilemma of
alternative last mile approaches in Urban
Logistics (UL). Self-Collection Delivery Systems (SCDS) represent an improvement for both courier companies and customers, providing time-window flexibility and reducing overall mileage, delivery time, and gas emissions. Drawing a
distinction from previous works involving hybrid
modeling for automated parcel lockers (APL)
network design, this study integrates a System
Dynamics Simulation Model (SDSM) to forecast
e-commerce demand in Pamplona (Spain), and
considers the scalability of the model for other
cities. A bi-criteria Facility Location Problem
(FLP) is proposed and solved with an
ε-constraint method, where ε is
defined as the level of coverage of the total
demand, and four different cases of demand
coverage are run. The simulation and demand forecast were carried out using AnyLogic software, with CPLEX as the optimization solver.
pdf
A Demand Modeling Pipeline for an Agent-Based Traffic
Simulation of the City of Barcelona
Jonas Fuentes Leon (Universitat Oberta de Catalunya,
Spindox Spain); Francesca Giancola (Spindox S.p.A.;
DIAG, Sapienza University of Rome); and Andrea
Boccolucci and Mattia Neroni (Spindox S.p.A.)
Abstract
The growth of the urban population and the
proliferation of mobility options in big cities
are adding to the complexity of comprehending
how people move about and how efficiently they
do it. Understanding how traffic patterns change
throughout the day is essential for legislators,
public administrations, and other stakeholders,
as it has a direct impact on citizens' quality
of life by, for instance, increasing greenhouse
gas emissions and noise pollution. In this
context, simulation becomes an essential tool
for grasping the emerging dynamics of urban
transportation, citizens' mobility patterns, and
traffic flow bottlenecks. This work presents a
complete data modelling pipeline for generating
the population, network and transportation
demand that is fed to a multi-modal traffic
simulation of the city of Barcelona using MATSim
and open-access statistical data sources. The
model is calibrated, the results are obtained,
and future applications of the developed tool
are outlined.
pdf
Technical Session · Logistics Supply Chains Transportation
Yard Management
Chair: Klaus Altendorfer (Upper Austrian University of
Applied Science)
Cloud-Based Hybrid Simulation Model For Optimizing
Warehouse Yard Operations
Mohammed Farhan, Pascalin Ngoko, Farouq Halawa, and
Raashid Mohammed (Amazon)
Abstract
Fulfillment centers in the E-commerce industry are highly complex systems that house inventory and fulfill customer orders. One of the key processes at these centers involves translating customer demands into truck and yard operations. Truck yards with operational issues can create delays in customer orders. In this
paper, we show how a scalable cloud-based hybrid
simulation model is used to improve yard
operations, optimize flow and design, and
forecast yard congestion. Cloud experimentation
along with automated database connectivity
allows any user to run simulation analyses to
derive data driven operational decisions. We
tested the model on two real-world case studies, which resulted in cost savings for the organization. This paper also proposes a robust
automated framework for setting simulation
validation benchmarks and measuring model
accuracy.
pdf
Simulation-Based Analysis of Improvements in Vehicle
Routing with Time Windows Using a One-sided VCG
Mechanism for the Reallocation of Unfavorable Time
Windows
Felix Roeper and Ralf Elbert (Technische
Universität Darmstadt)
Abstract
In road freight transport, booking unfavorable
time windows (TW) through time window management
systems (TWMS) for loading or unloading trucks
at the loading dock often leads to avoidable
long tours. Therefore, this paper investigates,
based on an agent-based simulation framework,
the efficiency gains and improvements in vehicle
routing with TW constraints that can be achieved
by a reallocation of unfavorable TWs using a
one-sided Vickrey-Clarke-Groves mechanism. A
branch-and-cut algorithm is used to evaluate the
value of a TW in the context of a pickup and
delivery problem with time windows and to
generate a bid for the auction. A winner
determination problem is solved for conducting
the auction. We show that a reallocation of
unfavorable TWs leads to distance savings for
the considered tours of the auction winners of
13% on average. Further, we can show that the
TWMS provider can benefit by operating the
mechanism on an electronic marketplace.
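For intuition, when a single unfavorable time window is reallocated, a VCG-style payment reduces to charging the winner the externality it imposes on the others, i.e., the second-highest bid. The toy sketch below uses illustrative bid values; the paper's one-sided mechanism and winner determination problem are more general.

```python
# Sketch of a VCG payment for reallocating one unfavorable time window.
# Each carrier bids its cost saving (from re-solving its routing problem)
# if it obtained the slot; values are illustrative.

bids = {"carrier_A": 180.0, "carrier_B": 140.0, "carrier_C": 95.0}

def vcg_single_item(bids):
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    # Welfare of the others without the winner minus their welfare with him,
    # which for a single item is the second-highest bid:
    payment = bids[ranked[1]] if len(ranked) > 1 else 0.0
    return winner, payment

winner, payment = vcg_single_item(bids)
print(winner, payment)   # carrier_A wins and pays 140.0
```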
pdf
Crossstacks: A Dataset and a Simulative Study of
Storage Allocation Strategies for Cross-Docking
Block-Stacking Warehouses
Alexandru Rinciog (TU Dortmund University), Natalia
Ogorelysheva (Fraunhofer IML), Jakob Pfrommer (TU
Dortmund University), Anna Vasileva (Fraunhofer IML),
and Hardik Rathod and Anne Meyer (TU Dortmund
University)
Abstract
Cross-docking is a warehousing strategy that
(ideally) moves goods from inbound docks
directly to outbound docks. In reality, goods
often need to be temporarily stored.
Cross-docking is typically set up as a
block-stacking warehouse (BSW), where goods are
stored directly on the ground. Autonomous mobile
robots (AMRs) could significantly reduce BSW
costs. To deploy AMR systems to BSWs, five
interlaced decision problems, including the
storage location assignment problem (SLAP), need
to be solved. Because of the combinatorial
complexity of BSWs, and the absence of pertinent
use case data and fitting simulation software,
this is a challenging task. This work seeks to
alleviate these gaps by (1) extending SLAPStack,
a fine-grained open-source BSW simulation
framework to accommodate cross-docking, (2)
providing CROSSStacks, a real-world
cross-docking dataset, and (3) evaluating two dual-command-cycle SLAP strategies as yet untested for BSWs. One of the approaches outperforms a naive cross-docking SLAP strategy.
pdf
Technical Session · Logistics Supply Chains Transportation
Simulation with Reinforcement Learning
Chair: Steffen Strassburger (Technische Universität
Ilmenau)
Multi-Agent Proximal Policy Optimization for a
Deadlock Capable Transport System in a
Simulation-Based Learning Environment
Marcel Müller (Otto von Guericke University
Magdeburg); Lorena Silvana Reyes Rubiano (RWTH Aachen
University, Universidad de La Sabana); and Tobias
Reggelin and Hartmut Zadek (Otto von Guericke University
Magdeburg)
Abstract
In this paper, we explore the potential of
multi-agent reinforcement learning (MARL) for
managing the driving behavior of autonomous
guided vehicles (AGVs) in production logistics
environments with single-lane tracks, where
deadlocks pose a significant challenge. We build
upon previous work and adopt a MARL approach
using the Proximal Policy Optimization (PPO)
algorithm. We conduct a thorough hyperparameter
search and investigate the impact of varying
numbers of agents on the performance of the
AGVs. Our results demonstrate the effectiveness
of the MARL approach in addressing deadlocks and
coordinating AGV behavior, as well as the
scalability of the learned policy to different
numbers of agents. The Bayesian optimization
process and increased iteration count contribute
to improved performance and more stable learning
curves.
pdf
Simulation Analysis of a Reinforcement-Learning-Based
Warehouse Dispatching Method Considering Due Date and
Travel Distance
Sriparvathi Shaji Bhattathiri, Ankita Tondwalkar,
Michael E. Kuhl, and Andres Kwasinski (Rochester
Institute of Technology)
Abstract
As the adoption of autonomous mobile robots in
warehouses and other industrial environments
continues to increase, there is a need for
methods that can effectively dispatch robots to
meet system demand. Real-time dispatching of
autonomous mobile robots can be very complex,
but simple rule-based methods are typically used
for this task. In this paper, a
reinforcement-learning-based dispatching method
for intralogistics (RLDI) is proposed. RLDI is
warehouse layout independent and takes into
consideration task due dates and the travel
distance. The algorithm is trained and tested in
a simulation environment that represents a small
warehouse. Monte Carlo simulation analysis is
used to explore the capabilities and limitations
of the established RLDI. The performance of the
method is compared to the shortest distance
dispatching rule in single and multi-agent
environments under various levels of due date
tightness. Experimental results demonstrate the
potential for using reinforcement learning
methods for warehouse dispatching.
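The shortest-distance baseline against which RLDI is compared can be sketched in a few lines; positions and tasks below are illustrative. Note that the rule ignores due dates entirely, which is precisely the information RLDI adds.

```python
import math

# Shortest-distance dispatching sketch: each idle robot is assigned to the
# open task whose pickup point is nearest. Coordinates are illustrative.

robots = {"r1": (0.0, 0.0), "r2": (8.0, 3.0)}                   # idle robots
tasks = {"t1": (1.0, 1.0), "t2": (7.0, 4.0), "t3": (4.0, 9.0)}  # pickup points

def dispatch_shortest_distance(robots, tasks):
    assignments, open_tasks = {}, dict(tasks)
    for rid, (rx, ry) in robots.items():
        if not open_tasks:
            break
        tid = min(open_tasks,
                  key=lambda t: math.hypot(open_tasks[t][0] - rx,
                                           open_tasks[t][1] - ry))
        assignments[rid] = tid
        del open_tasks[tid]
    return assignments

print(dispatch_shortest_distance(robots, tasks))  # {'r1': 't1', 'r2': 't2'}
```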
pdf
Purpose in the Machine: Do Traffic Simulators Produce
Distributionally Equivalent Outcomes for Reinforcement
Learning Applications?
Rex Chen, Kathleen M. Carley, Fei Fang, and Norman Sadeh
(Carnegie Mellon University)
Abstract
Traffic simulators are used to generate data for
learning in intelligent transportation systems
(ITSs). A key question is to what extent their
modelling assumptions affect the capabilities of
ITSs to adapt to various scenarios when deployed
in the real world. This work focuses on two
simulators commonly used to train reinforcement
learning (RL) agents for traffic applications,
CityFlow and SUMO. A controlled virtual
experiment varying driver behavior and
simulation scale finds evidence against
distributional equivalence in RL-relevant
measures from these simulators, with the root
mean squared error and KL divergence being
significantly greater than 0 for all assessed
measures. While granular real-world validation
generally remains infeasible, these findings
suggest that traffic simulators are not a deus
ex machina for RL training: understanding the
impacts of inter-simulator differences is
necessary to train and deploy RL-based ITSs.
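The two comparison measures can be computed as in the following sketch, where the sample arrays stand in for matched outputs (e.g., queue lengths) from the two simulators; the distributions used here are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
out_a = rng.normal(12.0, 3.0, 10_000)   # placeholder simulator A output
out_b = rng.normal(12.5, 3.4, 10_000)   # placeholder simulator B output

# RMSE between paired measurements.
rmse = np.sqrt(np.mean((out_a - out_b) ** 2))

# KL divergence between histogram estimates of the output distributions.
bins = np.linspace(min(out_a.min(), out_b.min()),
                   max(out_a.max(), out_b.max()), 50)
p, _ = np.histogram(out_a, bins=bins, density=True)
q, _ = np.histogram(out_b, bins=bins, density=True)
p, q = p + 1e-12, q + 1e-12              # avoid log(0)
p, q = p / p.sum(), q / q.sum()
kl = np.sum(p * np.log(p / q))

print(f"RMSE={rmse:.3f}, KL={kl:.4f}")   # values > 0 indicate divergence
```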
pdf
Technical Session · Logistics Supply Chains Transportation
Simulation-Optimization with Uncertainty
Chair: Javier Faulin (Public University of Navarre,
Institute of Smart Cities)
Solving the Multi-Allocation p-Hub Median Problem
with Stochastic Travel Times: A Simheuristic
Approach
Niklas Jost (TU Dortmund), Majsa Ammouriova (Universitat
Oberta de Catalunya), Aleksandra Grochala (TU Dortmund),
Angel Juan (Universitat Politècnica de València), and
Christin Schumacher (TU Dortmund)
Abstract
The p-hub median problems (pHMPs) are a
well-researched topic within the fields of
Operations Research and Industrial Engineering.
These problems have been found to have a wide
range of practical applications in various areas
such as logistics, retailing, and Internet
computing. These applications have made pHMPs an
important area of study, leading to numerous
research efforts aimed at solving different
variations of the problem. This paper presents a
simheuristic algorithm for solving the
uncapacitated version of the pHMP with
stochastic travel times. The proposed approach
combines simulation with biased-randomized
heuristics to generate high-quality solutions
quickly. The proposed method is validated by testing it on large benchmark instances, which
include stochastic travel times. The results
demonstrate the efficiency of the proposed
approach for this particular problem variation.
The simulation-optimization approach provides a
promising solution to a practical problem that
arises in many real-world applications.
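The biased-randomized construction at the heart of such simheuristics typically picks from a cost-sorted candidate list using a skewed distribution rather than always taking the greedy best. A sketch follows, with an illustrative geometric skew parameter and hypothetical hub names.

```python
import math
import random

random.seed(7)

def biased_pick(sorted_candidates, beta=0.3):
    """Pick index ~ Geometric(beta), truncated to the candidate-list length."""
    u = random.random()
    k = int(math.log(u) / math.log(1.0 - beta))
    return sorted_candidates[min(k, len(sorted_candidates) - 1)]

# Hypothetical hubs sorted by a greedy cost estimate (best first).
candidates = ["hub_3", "hub_7", "hub_1", "hub_9", "hub_4"]
solution = []
while len(solution) < 3:                      # p = 3 hubs
    pick = biased_pick([c for c in candidates if c not in solution])
    solution.append(pick)
print(solution)
# Each constructed solution is then evaluated by simulation under
# stochastic travel times (the simheuristic step).
```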
pdf
Simulation-based Analysis of Onshore Wind Farm
Installation Strategies
Daniel Rippel, Sebastian Eberlein, Stephan Oelker, and
Michael Lütjen (BIBA - Bremer Institut für
Produktion und Logistik GmbH at the University of
Bremen) and Michael Freitag (BIBA - Bremer Institut
für Produktion und Logistik GmbH at the University
of Bremen, University of Bremen)
Abstract
Wind energy constitutes a main contributor to
clean and renewable energy. While the offshore
sector received much attention from research and
industry, onshore wind farms still make up the
largest share of installation projects. Onshore installations face wind speed restrictions similar to those of their offshore counterparts but additionally introduce limits and wait time restrictions between installation operations.
This article proposes extending a planning
method initially designed for offshore wind
farms to cover these additional requirements and
proposes a simulation model capable of
evaluating the resulting plans. The results show
that the extended approach prevents violations
of these requirements, mitigates the influence
of weather forecast uncertainties, and provides
efficient plans for installation operations.
pdf
A Two-Stage Stochastic Model for Drone Delivery
System with Uncertainty in Customer Demands
Xudong Wang, Gerald Jones, and Xueping Li (University of
Tennessee, Knoxville)
Abstract
Drone delivery is a popular logistics method for
e-commerce businesses due to its efficiency and
convenience, especially for last-mile delivery
and emergency situations in areas with poor
infrastructure. However, the uncertainty of
customer demands can affect transportation costs
in the long run, making it vital to design an
effective delivery system. To tackle this issue,
we propose a two-stage stochastic model that
minimizes the sum of fixed and expected
operating costs. The first stage minimizes the
total cost of the delivery system, including the
facilities fixed costs and expected operating
costs, while the second stage arranges drones'
routes according to simulated demands to
estimate the minimal expected transportation
cost and penalty cost. Since this stochastic program has infinitely many scenarios, we deploy a sample average approximation method to estimate
its bounds. Additionally, we use a heuristic
simulation framework to find a satisfactory
solution in an acceptable time.
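A compact sketch of the sample average approximation step: the expected second-stage cost of each candidate first-stage design is estimated by averaging a placeholder routing/penalty cost over a common pool of sampled demand scenarios. All costs and the second-stage rule are illustrative stand-ins for the paper's subproblem.

```python
import random

random.seed(3)
FIXED_COST = {1: 500.0, 2: 900.0, 3: 1250.0}       # cost of opening k depots
# Common scenario pool across designs (common random numbers).
SCENARIOS = [max(0.0, random.gauss(150, 40)) for _ in range(200)]

def second_stage_cost(n_depots, demand):
    """Placeholder for the optimal routing/penalty cost given the design."""
    capacity = 80 * n_depots
    unmet = max(0.0, demand - capacity)
    return 2.0 * min(demand, capacity) / n_depots + 15.0 * unmet

def saa_objective(n_depots):
    avg = sum(second_stage_cost(n_depots, d) for d in SCENARIOS) / len(SCENARIOS)
    return FIXED_COST[n_depots] + avg

best = min(FIXED_COST, key=saa_objective)
print("best design:", best, "depots;", round(saa_objective(best), 1), "expected cost")
```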
pdf
Manufacturing and Industry 4.0
Track Coordinator - Manufacturing and Industry 4.0: Alp Akcay (Eindhoven University of Technology), Christoph
Laroque (University of Applied Sciences Zwickau), Guodong
Shao (National Institute of Standards and Technology)
Technical Session · Manufacturing and Industry 4.0
Panel: Maintenance and Operations of Manufacturing Digital
Twins
Chair: Alp Akcay (Eindhoven University of Technology)
Maintenance and Operations of Manufacturing Digital
Twins
Alp Akcay (Eindhoven University of Technology), Stephan
Biller (Purdue University), Boon Ping Gan (D-SIMLAB
Technologies Pte Ltd), Christoph Laroque (University of
Applied Sciences Zwickau), and Guodong Shao (National
Institute of Standards and Technology)
Abstract
Digital twins have become an important element
in smart manufacturing. As any other product,
digital twins also have a lifecycle, starting
from specifying the requirements of the digital
twins until their decommissioning. As part of
the Manufacturing and Industry 4.0 track of the
Winter Simulation Conference (WSC), the purpose
of this panel is to discuss the state of the art
in digital twins with a special emphasis on the
operations and maintenance of manufacturing
digital twins during their lifecycles. The
panelists come from academia, industry, and
government with experience in the digital-twin
landscape of the manufacturing industry in the
United States, Europe, and Asia. This paper
provides a collection of the statements from
each panelist with the objective of initiating a
deeper discussion during the panel session and
inspiring researchers in the simulation
community with their perspectives on the use of
digital twins for smart manufacturing.
pdf
Technical Session · Manufacturing and Industry 4.0
Biomanufacturing and Process Industry
Chair: Daniel Seufferth (Universität der Bundeswehr
München)
Stochastic Molecular Reaction Queueing Network
Modeling for In Vitro Transcription Process
Keqi Wang, Wei Xie, and Hua Zheng (Northeastern
University)
Abstract
To facilitate a rapid response to pandemic
threats, this paper focuses on developing a
mechanistic simulation model for in vitro
transcription (IVT) process, a crucial step in
mRNA vaccine manufacturing. To enhance
production and support industry 4.0, this model
is proposed to improve the prediction and
analysis of IVT enzymatic reaction network. It
incorporates a novel stochastic molecular
reaction queueing network with a regulatory
kinetic model characterizing the effect of
bioprocess state variables on reaction rates.
The empirical study demonstrates that the
proposed model has a promising performance under
different production conditions and it could
offer potential improvements in mRNA product
quality and yield.
pdf
Rolling-Horizon Simulation Optimization for a
Multi-Objective Biomanufacturing Scheduling
Problem
Kim van den Houten, Mathijs de Weerdt, and David Tax
(Delft University of Technology); Esteban Freydell
(DSM); and Eva Christopoulou and Alessandro Nati
(Systems Navigator)
Abstract
We study a highly complex scheduling problem
that requires the generation and optimization of
production schedules for a multi-product
biomanufacturing system with continuous and
batch processes. There are two main objectives, makespan and lateness, which are combined into a weighted-sum cost function. Additional complexity comes from the long horizons considered (up to a full year), yielding problem
instances with more than 200 jobs, each
consisting of multiple tasks that must be
executed in the factory. We investigate whether
a rolling-horizon principle is more efficient
than a global strategy. We evaluate how cost
function weights for makespan and lateness
should be set in a rolling-horizon approach
where deadlines are used for subproblem
definition. We show that the rolling-horizon
strategy outperforms a global search, evaluated
on problem instances of a real biomanufacturing
system, and we show that this result generalizes
to problem instances of a synthetic factory.
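The rolling-horizon principle can be sketched as follows: repeatedly solve a short look-ahead window with a placeholder subproblem solver, commit only the front part of the window, and roll forward. Window sizes, job data, and the earliest-due-date stand-in solver are illustrative assumptions, not the paper's method.

```python
jobs = [{"id": i, "deadline": 10 + 3 * i} for i in range(60)]
WINDOW, STEP = 30, 15            # look 30 days ahead, commit 15 days

def solve_window(window_jobs, start):
    """Placeholder subproblem solver: order window jobs by deadline (EDD)."""
    return sorted(window_jobs, key=lambda j: j["deadline"])

schedule, t, remaining = [], 0, list(jobs)
while remaining:
    window_jobs = [j for j in remaining if j["deadline"] <= t + WINDOW]
    ordered = solve_window(window_jobs or remaining[:1], t)
    # Freeze only the decisions inside the commit step, then roll forward.
    committed = [j for j in ordered if j["deadline"] <= t + STEP] or ordered[:1]
    schedule.extend(committed)
    remaining = [j for j in remaining if j not in committed]
    t += STEP

print(len(schedule), "jobs scheduled")
```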
pdf
From Simulation To Real-Time Digital Twin and AI -
Implementation in a Food Manufacturing Plant
Hosni Adra (CreateASoft, Inc)
Abstract
Data-driven simulation models are valuable tools that improve model accuracy and enable the transition to real-time predictive analytics. Adding AI (Artificial Intelligence) and ML (Machine Learning) enables those models to provide feedback and real-time optimization in an unattended environment. This paper details the steps taken and the benefits achieved in implementing such a system in a large filling and packaging manufacturing setting, from initial randomized models to a full real-time digital twin system. Final models were used to
optimize (real-time and offline) changeover, CIP
(Clean in Place), production, filling lines, and
material handling.
pdf
Technical Session · Manufacturing and Industry 4.0
Deep Reinforcement Learning Applications
Chair: Alp Akcay (Eindhoven University of Technology)
Semiconductor Fab Scheduling with Self-Supervised and
Reinforcement Learning
Best Contributed Applied Paper - Finalist
Pierre Tassel and Benjamin Kovács
(Alpen-Adria-Universität Klagenfurt); Martin Gebser
(Alpen-Adria-Universität Klagenfurt, Graz
University of Technology); Konstantin Schekotihin
(Alpen-Adria-Universität Klagenfurt); and Patrick
Stöckermann and Georg Seidel (Infineon Technologies
AG)
Abstract
Semiconductor manufacturing is a complex, costly
process involving a long sequence of operations
on limited, expensive equipment. Recent chip
shortages and their impacts have highlighted the
importance of semiconductors in the global
supply chains and how reliant on those our daily
lives are. Due to the investment cost,
environmental impact, and time scale needed to
build new factories, it is difficult to ramp up
production when demand spikes. This work
introduces a method to successfully learn to
schedule a semiconductor manufacturing facility
more efficiently using deep reinforcement and
self-supervised learning. We propose the first
adaptive scheduling approach to handle complex,
continuous, stochastic, dynamic, modern
semiconductor manufacturing models. Our method
outperforms the traditional hierarchical
dispatching strategies typically used in
semiconductor manufacturing plants,
substantially reducing each order’s
tardiness and time until completion.
Consequently, our method yields a better
allocation of resources in the semiconductor
manufacturing process.
pdf
Deep Reinforcement Learning with Discrete-event
Simulation for Steel Plate Stacking Problem
SaeNal Sung and SookYoung Son (HD Korea Shipbuilding &
Offshore Engineering); Young-in Cho, Hee-chang Yoon, and
Jong Hun Woo (Seoul National University); and Jong-Ho
Nam (Korea Maritime and Ocean University)
Abstract
In shipyards, newly supplied steel plates from
steel-making companies are stored in steel
stockyards until they are retrieved according to
the pre-determined cutting schedule. Steel
plates are grouped into lots, and all steel
plates of the identical lot are retrieved and
transported into the cutting workshop at the
same time. In this study, we developed a two-stage stacking algorithm to minimize the
workload of overhead cranes for the rehandling
work in the retrieval process. In the proposed
algorithm, a reinforcement learning-based agent
which learns the stacking policy in the
simulation environment determines the initial
stacking location of the steel plates only
considering the cutting schedule. After the
initial arrangement of steel plates is created,
the steel plates are reshuffled using simulated annealing, considering both the cutting schedule and lot information.
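The reshuffling stage can be illustrated with a generic simulated annealing loop over pairwise swaps; the cost function below is a toy proxy for crane rehandling effort, not the authors' objective, and all parameters are illustrative.

```python
import math
import random

random.seed(4)

def cost(arrangement):
    """Toy proxy for rehandling effort: adjacent pairs out of retrieval order."""
    return sum(1 for i in range(len(arrangement) - 1)
               if arrangement[i] > arrangement[i + 1])

def anneal(arrangement, T0=10.0, cooling=0.995, iters=5000):
    current, best, T = list(arrangement), list(arrangement), T0
    for _ in range(iters):
        i, j = random.sample(range(len(current)), 2)
        neighbor = list(current)
        neighbor[i], neighbor[j] = neighbor[j], neighbor[i]   # swap two plates
        delta = cost(neighbor) - cost(current)
        # Accept improvements always, worsenings with Boltzmann probability.
        if delta <= 0 or random.random() < math.exp(-delta / T):
            current = neighbor
            if cost(current) < cost(best):
                best = list(current)
        T *= cooling
    return best

plates = random.sample(range(30), 30)   # retrieval order per plate
print(cost(plates), "->", cost(anneal(plates)))
```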
pdf
Digital Twins and Deep Reinforcement Learning for
Online Optimization of Scheduling Problems
Bulent Soykan and Ghaith Rabadi (University of Central
Florida)
Abstract
This paper presents an approach that combines
data-driven digital twins (DTs) and deep
reinforcement learning (DRL) to address the
challenges of online optimization of scheduling
problems, focusing specifically on the classic
job shop scheduling problem. Traditional
approaches to solving such problems often
encounter limitations in handling uncertainties
and dynamic environments. In this study, we
explore the integration of DTs and DRL to
enhance decision-making in scheduling problems.
We investigate the adaptability of a Graph
Neural Network model within the DRL framework,
enabling the agent to learn optimal scheduling
policies through interactions with the DT. The
potential of this convergence to tackle modern
scheduling complexities offers insights into the
future of operations management.
pdf
Technical Session · Manufacturing and Industry 4.0
Manufacturing Operations
Chair: Klaus Altendorfer (Upper Austrian University of
Applied Science)
Modeling and Simulation for the Operative Service
Delivery Planning in the Context of Product-Service
Systems
Enes Alp (Ruhr-Universität Bochum); Michael Herzog
(Centre for the Engineering of Smart Product-Service
Systems (ZESS)); Furkan Ercan (Ruhr-Universität
Bochum); and Bernd Kuhlenkötter
(Ruhr-Universität Bochum, Centre for the
Engineering of Smart Product-Service Systems (ZESS))
Abstract
Accelerated with the developments in the context
of Industry 4.0, a new trend has established
itself in the manufacturing industry within the
last two decades. Companies started to offer
integrated solutions such as Product-Service
Systems (PSS). While the provision of PSS
enables benefits like business model innovation
or strengthening competitiveness, the
exploitation of these benefits depends heavily
on the decisions in the operative service
delivery planning. This, however, is a complex
task due to the huge solution space. Analytical
methods reach their limitations when trying to
find the optimal solution. Though different optimization algorithms have been developed for this
problem, the evaluation of their solutions is
overly simplified, and thus, their
expressiveness for the uncertain and dynamic
reality remains questionable. This paper
addresses these issues by demonstrating the
modeling of an adaptive simulation model that
can be used to gain a realistic evaluation of
operative service delivery plans in PSS.
pdf
Simulation-Based Energy Reduction for a Lead-Acid
Battery Production with Stochastic Maturation and
Drying Processes
Balwin Bokor and Klaus Altendorfer (University of
Applied Sciences Upper Austria)
Abstract
The reduction of carbon dioxide emissions is a
major goal of the European Union and energy
storage is a core aspect to reach this goal.
However, the production of lead-acid batteries
is very energy consuming. Based on a case
company production system and data, we develop a
simulation model for the most energy-intensive
lead-acid battery production steps, i.e.,
ripening and drying of lead plates. As both
processes have some non-controllable stochastic
aspects, the planned process times for both
steps are a crucial factor for overall energy
consumption. Planned process times that are too low or too high lead either to energy wasted on re-warm-up or to unnecessary energy consumption during processing. Simulation results reveal a
significant energy reduction potential when
optimizing planned process times, which
increases when process uncertainty decreases. In addition, the post-maturation and post-drying times are found to have a strong influence on overall energy consumption.
pdf
LNG CCS (Cargo Containment System) Manufacturing
System using IoT Data and Schedule Simulation
Yonghee Kim and Eunsun Jeong (HDKSOE)
Abstract
Compared to other manufacturing industries, the shipbuilding industry has high uncertainty and volatility in resources such as manpower, space, and equipment. Its labor-intensive processes, expansive yard spaces, and enclosed working areas make it difficult to aggregate and analyze data. The research effort presented in this extended abstract focuses on gathering production data using IoT technology and on schedule simulation, with the intent of reducing uncertainty in project management. The data gathered from automated equipment can be employed to monitor production performance and conduct data-driven production management, making it possible to prevent declines in production performance and to exclude batch performance records that are unrelated to actual work. In addition, we use simulation to find the optimal solution for load leveling in the process of establishing an LNG CCS manufacturing plan.
pdf
Technical Session · Manufacturing and Industry 4.0
Manufacturing Intralogistics
Chair: Nitish Singh (Eindhoven University of
Technology)
Simulation-Based AGV Management with a Linear
Dispatching Rule
Nitish Singh, Jeroen B.H.C. Didden, Alp Akcay, Tugce
Martagan, and Ivo J.B.F. Adan (Eindhoven University of
Technology)
Abstract
This paper considers the problem of real-time
dispatching of a fleet of heterogeneous
automated guided vehicles (AGVs) with battery
constraints. The AGV fleet is heterogeneous in
terms of material handling capabilities; some
can tow loads, some can lift loads while others
manipulate loads with the assistance of a
robotic arm. Transport requests arrive in
real-time and include a soft time window, with
late delivery incurring tardiness costs.
Transport requests need to be assigned to a
capable AGV based on required material handling
capabilities with the objective to minimize a
weighted sum of tardiness costs of transport
requests and travel costs of AGVs. In this
paper, an AGV-specific linear dispatching rule
(LDR) learning approach is proposed to assign
AGVs to randomly arriving transport requests in
real time over a finite horizon. The proposed
approach is compared with a heuristic policy
from practice by using real-world data provided
by our industry partner.
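A linear dispatching rule of this kind scores each capable AGV with a weighted sum of features and assigns the request to the best-scoring vehicle. In the sketch below, the features and weights are hypothetical; in the paper's approach the weights are learned per AGV.

```python
# Linear dispatching rule (LDR) sketch. Hypothetical learned weights for the
# features [travel_distance, slack_to_due_date, battery_level].
WEIGHTS = [-1.0, 0.4, 0.2]

agvs = [
    {"id": "tow_1",  "capable": True,  "dist": 12.0, "slack": 30.0, "battery": 0.8},
    {"id": "lift_1", "capable": False, "dist":  3.0, "slack": 30.0, "battery": 0.9},
    {"id": "tow_2",  "capable": True,  "dist":  5.0, "slack": 30.0, "battery": 0.4},
]

def score(agv):
    feats = [agv["dist"], agv["slack"], agv["battery"]]
    return sum(w * f for w, f in zip(WEIGHTS, feats))

# Only AGVs with the required material handling capability are considered.
capable = [a for a in agvs if a["capable"]]
chosen = max(capable, key=score)
print(chosen["id"])   # tow_2: shorter travel outweighs its lower battery
```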
pdf
Analysis of Autonomous Mobile Robots in Warehousing
Using a Digital Twin Simulation
Michael Sellen (CreateASoft, Inc)
Abstract
The continued acceleration of e-commerce growth presents a challenge for fulfillment centers: managing growing SKU counts and increased demand volatility while continuing to satisfy customer delivery expectations and maintaining control over costs. Many fulfillment centers are turning to automated solutions such as Autonomous Mobile Robots (AMRs) in an effort to increase throughput and efficiency in existing facilities. AMRs move throughout the warehouse environment guidance-free and can be deployed for goods-to-person delivery and bulk material movement, and they can work collaboratively with employees in picking
applications. For warehouse operations
management teams and AMR solution providers,
identifying the optimum fleet size and
deployment logic for current and projected
demand is a crucial step in a successful
adoption of this technology. Data-Driven
modelling and simulation can be a useful asset
when evaluating different solutions and
requirements before installation as well as
identifying opportunities for increased
efficiency or expansion in existing operations.
pdf
Sequential Decision-Making Framework for Robotic
Mobile Fulfillment System-Based Automated Kitting
System
Jaeung Lee, Sungwook Jang, and Young Jae Jang (Korea
Advanced Institute of Science and Technology) and Yooeui
Jin, Il Kyu Lim, Seungmin Jeong, and Eoksu Sim (Global
Technology Research Samsung Electronics)
Abstract
In a flexible production line capable of
producing various product types within a single
assembly line, an efficient parts supply is
critical. The kitting feeding policy,
implemented in the flexible production line,
aims to kit and supply the necessary parts to
the production line without delay. This study
investigates the kitting feeding operation for
Samsung Electronics’ surface-mount device
production line. To facilitate the timely supply
of parts required for surface-mount device
production, Samsung Electronics introduced a
robotic mobile fulfillment system-based
automated kitting system. This research proposes
a sequential decision-making framework to
address the kitting operation optimization
problem, as well as a kitting scheduling
algorithm within the proposed framework. A
simulation environment has been implemented to
verify the performance of the proposed framework
and algorithm through a series of experiments.
The experimental results indicate that the
proposed framework enhances operational
performance and maintains stability, even as the
problem size expands.
pdf
Technical Session · Manufacturing and Industry 4.0
Case Studies in Manufacturing I
Chair: David T. Sturrock (Simio LLC)
Simulation of SKU Slotting in Lift Truck
Manufacturing Facility Warehouse: Raymond Corporation,
Iowa
Jay Amer (University of Tennessee, Knoxville; N. J.
Malin); Xueping Li (University of Tennessee, Knoxville);
and Michael Bambino (N. J. Malin)
Abstract
Abstract
The objective of this simulation was to estimate
the impact of optimizing parts slotting on
picking throughput within the existing Raymond
Corporation lift truck manufacturing facility
warehouse in Iowa. The simulation demonstrated
that slotting can result in a 67.89% increase in
picking throughput. This increase exceeded
production requirements and eliminated the need
to outsource picking.
pdf
Simulating the Material Delivery Process for an
Automotive Body Shop
Joseph Hugan (TriMech, LLC)
Abstract
Abstract
Increasing product customization and a continual
need for higher productivity have led to more
complex automotive vehicles being built in more
compressed spaces. The material delivery
networks supporting these processes have also
had to adapt to deliver a wider variety of parts
in smaller packaging at an increasing frequency.
The author will discuss the development and
analysis of an automotive delivery network
simulation with a focus on delivery times, the
resources required, the data model used to drive
the simulation and the analytical techniques
used during the project. The presentation will
also include a discussion on the model
construction, the time required to construct the
model, and the challenges encountered in the
project.
pdf
An Integrated System of Scheduling and Digital Twins
for Ore Transportation Inside-Outside Steelworks
Shun Yamamoto and Akira Kumano (JFE Steel Corporation)
Abstract
Abstract
JFE Steel Corporation has developed an ore
logistics optimizer to reduce transportation
costs. Because the Japanese steel industry
imports large quantities of raw materials, the
huge cost of ship freight and demurrage fees has
become a problem. This work presents the ore
carrier scheduler which was developed using
metaheuristics methods to minimize logistics
costs. A strategy of consolidating various iron
ore brands at a junction spot that super-large
carriers can enter is suggested. A digital twin
that represents the stockyard in the steelworks
is developed using a discrete simulator to
verify the feasibility of operations, confirming
the possibility of reducing costs by more than
10% by utilizing this system.
pdf
Technical Session · Manufacturing and Industry 4.0
Predictive Maintenance
Chair: Christoph Laroque (University of Applied Sciences
Zwickau)
Simulation-Based Evaluation of Imperfect Predictive
Maintenance Models in Discrete Manufacturing: A
Procedure Model and Case Study
Clemens Gutschi, Nikolaus Furian, and Siegfried Voessner
(Graz University of Technology)
Abstract
Abstract
The performance and reliability of production
systems are greatly affected by sudden
breakdowns. In order to avoid these unforeseen
interruptions, predictive maintenance (PdM)
systems are widely used to predict failures and
prevent outages through maintenance. The
performance of PdM systems, however, depends
heavily on the precision and recall of the
prediction results. In the worst case, missed or
false alarms can actually worsen the performance
of a production system instead of improving it.
We present a new procedure model which
specifically focuses on the imperfection of such
PdM systems and estimates the impact of this
unwanted property on the performance and
economic aspects of a production system. The
model is presented in all steps needed for
implementation and evaluation and is
demonstrated in a realistic use case examining
an interlinked production system with a
simulation-based approach.
pdf
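The core argument, that PdM value hinges on precision and
recall, can be illustrated with a small Monte Carlo sketch. All
cost figures and the alarm model below are illustrative
assumptions, not the authors' procedure model.

# Minimal Monte Carlo sketch of how imperfect precision/recall
# of a PdM system translates into expected cost; all parameter
# values are illustrative assumptions only.
import random

def pdm_cost(n_failures, recall, precision,
             c_breakdown=100.0, c_planned=20.0, c_false_alarm=15.0,
             n_runs=10_000, seed=42):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        caught = sum(rng.random() < recall for _ in range(n_failures))
        missed = n_failures - caught
        # Each true alarm comes with (1/precision - 1) false
        # alarms on average.
        false_alarms = caught * (1.0 / precision - 1.0)
        total += (missed * c_breakdown + caught * c_planned
                  + false_alarms * c_false_alarm)
    return total / n_runs

for recall, precision in [(0.9, 0.9), (0.9, 0.5), (0.5, 0.9)]:
    print(f"recall={recall}, precision={precision}: "
          f"expected cost {pdm_cost(20, recall, precision):.1f}")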
Data-Driven Smart Maintenance Decision Analysis: A
Drone Factory Demonstrator Combining Digital Twins and
Adapted AHP
Paulo Victor Lopes (Aeronautics Institute of Technology)
and Siyuan Chen, Juan Pablo González Sánchez,
Ebru Turanoglu Bekar, Jon Bokrantz, and Anders Skoogh
(Chalmers University of Technology)
Abstract
Abstract
The concept of Digital Twins has gained
significant attention in recent years due to its
potential for improving the performance of
production systems. One promising area for
Digital Twins is Smart Maintenance, enabling the
simulation of different strategies without
disrupting operations in the real system. This
study proposes a high-level framework to
integrate Digital Twins to support Smart
Maintenance data-driven decision making in
production lines. We then implement a case study
of a lab-scale drone factory to demonstrate how
production line performance is evaluated under
different what-if maintenance scenarios. The
effects of this Smart
Maintenance decision analysis approach were
evaluated according to Key Performance
Indicators from literature. The identified
contributions are: (i) Digital Twin demonstrator
focused on smart maintenance; (ii)
implementation of smart maintenance data-driven
decision analysis concepts; (iii) design and
evaluation of what-if maintenance scenarios.
pdf
Understanding Stakeholder Requirements for Digital
Twins in Manufacturing Maintenance
Siyuan Chen (Chalmers University of Technology); Paulo
Victor Lopes (Aeronautics Institute of Technology,
Federal University of Sao Paulo); and Juan Pablo
González Sánchez, Ebru Turanoglu Bekar, Jon
Bokrantz, and Anders Skoogh (Chalmers University of
Technology)
Abstract
Abstract
Digital twins have emerged as a key technology
in the era of smart manufacturing and hold
significant potential for maintenance. However,
gaps remain in understanding stakeholders'
requirements and how this technology supports
maintenance-related decisions. This paper aims
to identify stakeholders' requirements for
digital twin implementation and examine the role
of digital twins in supporting maintenance
actions and decision-making processes.
Semi-structured interviews and a workshop
involving manufacturing practitioners and
researchers were conducted to attain these
goals. Furthermore, an in-depth qualitative
analysis of the interview data was carried out.
The results shed light on the current state of
digital twin adoption, implementation
challenges, requirements, supported decisions
and actions, and future demand characteristics.
By integrating the findings from the literature
review and interview analysis, this study
outlines the requirements for the digital twins
as expressed by industry stakeholders that will
be used and tested in the drone factory digital
twin model.
pdf
Technical Session · Manufacturing and Industry 4.0
Assembly Lines
Chair: Deogratias Kibira (National Institute of Standards
and Technology, University of Maryland)
A Simulation-Based Approach for Line Balancing under
Demand Uncertainty in Production Environment
S. M. Atikur Rahman and Md Fashiar Rahman (The
University of Texas at El Paso), Tamanna Kamal (NC State
University), and Tzu-Liang (Bill) Tseng (The University
of Texas at El Paso)
Abstract
Abstract
The management of a production line is a
challenging task due to the high level of
uncertainty in demand, which can lead to
unbalanced utilization of resources. This may
result in a potential deterioration of
management satisfaction in terms of
cost-effectiveness. Efficient tools are
therefore required to optimize resource
utilization. To meet this need, this paper
presents a simulation-based decision support
framework for the garment industry. The
Discrete Event Simulation (DES) is used to model
different scenarios for the operational
processes. The procedure focuses on the line
balancing technique, which aims to eliminate
bottlenecks and optimize the production process
by balancing the workload. The results of this
study demonstrate the effectiveness of the line
balancing technique in improving line
efficiency, reducing the idle time of the
operators, and increasing productivity. The
simulation was developed using AnyLogic
simulation software. The outcome of the process
is thoroughly evaluated and justified using a
case study.
pdf
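A minimal discrete-event sketch of the line-balancing effect
described above, using SimPy as a lightweight stand-in for the
AnyLogic model; the station cycle times, horizon, and
rebalanced split are illustrative assumptions.

# SimPy sketch: compare throughput of an unbalanced vs. a
# balanced two-station line over a fixed horizon. Times are
# illustrative assumptions, not the paper's data.
import simpy

def station(env, name, inbox, outbox, cycle_time, idle):
    while True:
        t0 = env.now
        item = yield inbox.get()
        idle[name] += env.now - t0        # time starved of work
        yield env.timeout(cycle_time)     # process the garment
        yield outbox.put(item)

def run_line(cycle_times, n_items=60, horizon=240):
    env = simpy.Environment()
    buffers = [simpy.Store(env) for _ in range(len(cycle_times) + 1)]
    for i in range(n_items):
        buffers[0].put(f"garment-{i}")    # release all work at time 0
    idle = {f"st{i}": 0.0 for i in range(len(cycle_times))}
    for i, ct in enumerate(cycle_times):
        env.process(station(env, f"st{i}", buffers[i],
                            buffers[i + 1], ct, idle))
    env.run(until=horizon)
    return len(buffers[-1].items)         # completed by the horizon

print(run_line([4.0, 8.0]))   # unbalanced: station 1 is the bottleneck
print(run_line([6.0, 6.0]))   # balanced workload: higher throughput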
Optimization of Flat Block Assembly Line Using
Constraint Programming and Discrete-Event
Simulation
Dong Hoon Kwak and Jong Hun Woo (Seoul National
University); Ki Young Cho (Seoul National University,
Department of Naval Architecture and Ocean Engineering);
and Hee Chang Yoon (Seoul National University)
Abstract
Abstract
Scheduling of flat block assembly in a shipyard
is crucial for productivity performance due to
the high level of workload. This problem is
commonly known as the permutation flowshop
scheduling problem (PFSP) in operations research,
which has been extensively studied in various
papers since the 1950s. However, existing
solutions often involve simplifying real-world
problems with certain assumptions, limiting
their practical applicability. In recent times,
constraint programming (CP) has emerged as a
strong alternative to exact algorithms and has
been successfully applied to various PFSP,
addressing the limitations of exact algorithms.
In light of this, our study proposes a two-step
optimization process, composed of CP and
discrete-event simulation (DES), to overcome the
existing limitations.
pdf
Digital Twin Architecture for a Flow Shop Assembly
System
Gihan Lee and Seunghwan Chang (Ajou University), Onyu Yu
and Jungik Yoon (LG Production and Research Institute),
and Sangchul Park (Ajou University)
Abstract
Abstract
This paper proposes a digital twin architecture
for a flow shop assembly line to maximize
productivity and reduce quality costs. The
proposed digital twin architecture consists of
five major modules: a Synchronization module to
synchronize the real factory and the digital
twin, a Monitoring module to provide intuitive
information visualization, an Event calendar
initialization module to initialize the factory
state at any given time to the starting point of
the CPS (Cyber-Physical System) simulation, a
CPS simulation module to identify potential
production losses, and a Decision-making module
to take proactive actions to avoid anticipated
production losses. The proposed digital twin
architecture has been implemented for a home
appliance factory of LG Electronics Co., Ltd. in
South Korea, and shows significant improvements
in terms of productivity, quality cost, and
energy efficiency.
pdf
Technical Session · Manufacturing and Industry 4.0
Simulation Approaches
Chair: Guodong Shao (National Institute of Standards and
Technology)
Reverse Engineering the Future – An Automated
Backward Simulation Approach to On-Time Production in
the Semiconductor Industry
Madlene Leißau and Christoph Laroque (University of
Applied Sciences Zwickau)
Abstract
Abstract
Researchers are investigating innovative
techniques and tools to improve operational
production planning, as manufacturing processes
are increasingly influenced by new product
demands, innovation, and cost-effectiveness.
Backward-oriented discrete event simulation
(SimBack) is one such tool that has shown great
promise in this area. However, conducting
multiple simulation runs for backward simulation
can be time and resource-intensive, hampering
its efficiency. To address this issue, this
paper proposes an automated approach for
executing and evaluating simulation experiments
within the framework of backward-oriented
discrete event simulation for scheduling and
capacity planning. The authors illustrate their
approach by applying it to a simulation model of
the Semiconductor Manufacturing Testbed 2020
(SMT2020).
pdf
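The backward-simulation idea can be sketched in a few lines:
walk a lot's route in reverse from its due date to obtain
latest start times and the latest release time. This is only
the scheduling skeleton; a full backward DES such as SimBack
must also resolve capacity conflicts, which this sketch omits.

# Conceptual sketch of the backward-scheduling idea behind
# backward simulation; route and due date are illustrative.
def backward_pass(route, due_date):
    """route: list of (operation, processing_time), forward order."""
    schedule, t = [], due_date
    for op, p in reversed(route):
        t -= p                  # operation must start no later than t
        schedule.append((op, t))
    schedule.reverse()
    return schedule, t          # t is the latest release time

route = [("litho", 3.0), ("etch", 2.0), ("test", 1.5)]
ops, release = backward_pass(route, due_date=100.0)
print(ops)        # latest start time per operation
print(release)    # latest lot release to still be on time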
Using Kubernetes to Improve Data Farming
Capabilities
Falk Stefan Pappert, Daniel Seufferth, Heiderose Stein,
and Oliver Rose (University of the Bundeswehr Munich)
Abstract
Abstract
Simulation can reach computational limits,
especially when running large-scale experiments.
One way to counter this issue is distributed
simulation. Recent developments in
containerization and container orchestration
technologies, such as Kubernetes, provide a
stable and scalable infrastructure that can
serve distributed simulation. Although these
solutions exist, applications within the
simulation community remain scarce. Thus, in
this paper, we present the general setup of such
an infrastructure and discuss the application of
an example case. Adding to the existing
literature, we present our path forward and
insights with different versions, as well as the
efforts needed to construct similar
implementations. As a result, we showcase the
speed-up of simulation experimentation. We aim
to provide a helpful foundation for others in
our community to weigh the effort and benefit of
such a system for their own projects.
pdf
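As a sketch of how such an infrastructure might be driven
programmatically, the snippet below uses the official
Kubernetes Python client to launch a batch of independent
simulation replications as a parallel Job. The image name,
namespace, and command are placeholders, not the authors'
actual configuration.

# Hedged sketch: launch N simulation replications as a
# Kubernetes Job via the official Python client. Image,
# namespace, and command are placeholder assumptions.
from kubernetes import client, config

def launch_replications(image, n_replications, parallelism,
                        namespace="default"):
    config.load_kube_config()                 # cluster credentials
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name="sim-data-farming"),
        spec=client.V1JobSpec(
            completions=n_replications,       # total replications
            parallelism=parallelism,          # concurrent pods
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[client.V1Container(
                        name="sim-worker",
                        image=image,          # containerized simulator
                        command=["python", "run_replication.py"],
                    )],
                )
            ),
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace=namespace,
                                              body=job)

# launch_replications("registry.example.com/sim:latest",
#                     n_replications=1000, parallelism=50)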
Optimizing Production System Configurations across a
Broad Design Space: A Case Study
Scott Nill and Larissa Nietner (LineLab, MIT)
Abstract
Abstract
This paper presents a case study demonstrating
the application of LineLab, a mathematical
production system modeling tool, to optimize
production system configurations and the ramp-up
trajectory for novel mass timber building
modules. The modeling tool can efficiently
co-optimize a large number of variables, such as
machine count, work-in-progress (WIP) count,
average wait times, and throughput, thus helping
to narrow down a broad design space. Sidewalk
Labs, a Google company, faced unique challenges
related to new product development, high-mix
production, and phased ramp-up. This case study
highlights the use of this mathematical
optimization tool, and its integration with
other simulation methodologies, resulting in an
optimized digital pipeline for modeling the
production scale-up for mass timber buildings.
The insights provided contribute to the
advancement of production optimization
techniques and their applications across various
industries.
pdf
Technical Session · Manufacturing and Industry 4.0
Manufacturing and Supply Chains
Chair: Thomas Felberbauer (St. Pölten University of
Applied Sciences)
Modeling Risk Prioritization of a Manufacturing
Supply Chain using Discrete Event Simulation
Arpita Chari and Silvan Marti (Chalmers University of
Technology); Paulo Victor Lopes (Aeronautics Institute
of Technology (ITA), Chalmers University of Technology);
and Björn Johansson, Mélanie Despeisse, and
Johan Stahre (Chalmers University of Technology)
Abstract
Abstract
Supply chains face a myriad of adverse risks
that impact their daily operations and make them
vulnerable. In addition, supply chains continue
to grow in size and complexity, which further
complicates the problem. The lack of a
structured approach and limitations in existing
risk management methods prevent effective
mitigation strategies from being properly
developed. In this paper, we develop a discrete
event simulation modelling approach to quantify
the performance and risk assessment of a
manufacturing supply chain in Sweden which is
under the impact of risks. This approach could
support decision makers by prioritizing risks
according to their performance impact and
facilitating the development of mitigation
strategies to enhance the resilience of the
supply chain. The conceptual digital model can
also be used to generate synthetic data to build
an artificial intelligence-enhanced predictive
demonstrator model to showcase capabilities for
building data-driven resilience of the supply
chain.
pdf
A Simulation-Based Approach for Evaluating Different
Model Mixes for Production Planning of a Contract
Manufacturer in the Automotive Industry
Simon Gruber, Clemens Gutschi, Nikolaus Furian, and
Siegfried Vössner (Graz University of Technology,
Institute of Engineering- and Business Informatics)
Abstract
Abstract
Contract manufacturers face challenges with
short-term orders, cost pressures, and diverse
customer requirements. Customer trends in the
automotive industry intensify these challenges
with reduced batch sizes and individual
customization. Traditional analytic planning
methods are insufficient for handling the
complexity of modern manufacturing processes.
Computational power alone cannot overcome this
obstacle; careful modeling of production
processes and resources is essential. Simulative
approaches have been developed to address
similar problems. In this use case, we aim to
adapt and implement these approaches for a
leading automotive contract manufacturer. A
comprehensive assessment will then verify the
adapted approach’s viability and
potential.
pdf
Digital Twins for Supply Chains: Main Functions,
Existing Applications, and Research
Opportunities
Giovanni Lugaresi (KU Leuven); Zied Jemai
(CentraleSupelec, Ecole Nationale d'Ingénieurs de
Tunis); and Evren Sahin (CentraleSupelec)
Abstract
Abstract
In recent times, manufacturing industries and
their related supply chains have faced growing
internal and external pressures. Due to the
complex nature of global supply chain networks
and the increased frequency of disruptive
events, there is a pressing need to implement
digital tools to support these industries.
Digital twins have gained significant interest
from industry and research communities due to
their ability to provide valuable services in
the short term. While there have been many
contributions on digital twin-based
methodologies for system design and production
planning and control, the use of digital twins
in supply chain management remains
underdeveloped. This paper presents an overview
of the
existing contributions on digital twins for
supply chains. Starting from a preliminary
literature review on the topic, relevant works
are selected and used to identify insights on
the current development level and future
research opportunities.
pdf
Technical Session · Manufacturing and Industry 4.0
Production Planning
Chair: Geert van Kollenburg (Eindhoven University of
Technology)
Investigating Production Yield Effect on Inventory
Control Through a Hybrid Simulation Approach
Marina Materikina, Atefeh Shoomal, Linh Ho Manh, and
Yuan Zhou (University of Texas Arlington)
Abstract
Abstract
Production Planning and Control (PPC) plays a
key role in stabilizing and improving
manufacturing processes under external and
internal uncertainties by providing transparency
in the whole system. This study focuses on PPC
with internal uncertainties such as losses of
work-in-process products during a contact lens
manufacturing process. Although such losses are
expected, the yield rates are uncertain and vary
at different production stages. A hybrid
agent-based simulation (ABS) and discrete-event
simulation (DES) approach was utilized to
resemble the underlying dynamics of the
manufacturing system with uncertain yield rates.
The results of the simulation experiments
demonstrated that a simple average yield
approach for production planning would cause
potential backlogs and extra holding costs for
the excess inventory. The proposed hybrid
simulation could be used to support the
decision-making process on a weekly basis to
help a production planning team make a schedule
that would improve efficiency and customer
satisfaction.
pdf
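The paper's central observation, that planning with a simple
average yield causes both backlogs and excess inventory, can be
reproduced with a toy Monte Carlo sketch; the stage yield
distributions and demand below are illustrative assumptions.

# Toy sketch: an average-yield release plan under stochastic
# per-stage yields produces both backlog and excess inventory.
import random

def simulate(release_qty, demand, stage_yields,
             n_runs=20_000, seed=1):
    rng = random.Random(seed)
    short = over = 0.0
    for _ in range(n_runs):
        good = release_qty
        for mean, spread in stage_yields:   # uncertain stage yield
            good *= rng.uniform(mean - spread, mean + spread)
        short += max(demand - good, 0)
        over += max(good - demand, 0)
    return short / n_runs, over / n_runs

stages = [(0.95, 0.05), (0.90, 0.10), (0.92, 0.08)]  # (mean, +/-)
avg_yield = 0.95 * 0.90 * 0.92
release = 1000 / avg_yield     # naive average-yield plan, demand 1000
backlog, excess = simulate(release, 1000, stages)
print(f"release {release:.0f}: avg backlog {backlog:.1f}, "
      f"avg excess {excess:.1f}")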
Stick to the Plan or Adjust Dynamically? Combining
Order Release and Overtime Planning for Varying Demand
and Process Uncertainty
Julian Fodor and Stefan Haeussler (University of
Innsbruck)
Abstract
Abstract
Within the area of manufacturing planning and
control there is a long ongoing debate on when
and if decisions should be integrated into a
centralized model or split across separate planning
levels. While a centralized monolithic model is
capable of solving separate decisions
simultaneously, a hierarchical approach offers
more degrees of freedom since a local planner
always has more accurate information. The focus
of this paper is on the design and mathematical
assumptions of optimization models for overtime
and order release decisions in order to cope
with different degrees of demand and process
uncertainty. We execute the optimal decisions
within a simulation model of a multi-stage,
multi-product stylized flow shop. Our results
show that a fully centralized design is
outperformed by a hierarchical design and that
planning order release quantities centrally, in
combination with flexible overtime planning,
yields the lowest costs under high process
uncertainty on the shop floor.
pdf
An MDP Model-Based Reinforcement Learning Approach
for the Nesting Problem: A Case Study in Ship
Design
SookYoung Son (Seoul National University, HD KSOE);
YounHyun Kim and KiSun Kim (HD KSOE); and JongHun Woo
(Seoul National University, Research Institute of Marine
Systems Engineering)
Abstract
Abstract
The nesting problem in the shipbuilding industry
calls for an increase in the utilization rates
of plates and a decrease in the scrap ratio. To
improve the efficiency of part nesting in ship
design, this paper proposes an approach that
uses a reinforcement learning algorithm to
determine an efficient arrangement of parts. We
frame the ship nesting problem as a Markov
Decision Process (MDP) to apply the Proximal
Policy Optimization (PPO) model, a reinforcement
learning algorithm. A case study on a real-life
nesting design is provided to validate and
compare the proposed approach.
pdf
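A schematic sketch of how the nesting problem might be framed
as an MDP: the state is the plate occupancy grid, an action
places the next part, and the reward is the utilization gain.
This environment is an assumption for illustration, not the
authors' formulation; a PPO agent from a standard RL library
would be trained against it.

# Schematic MDP-style environment for part nesting; the grid,
# parts, and reward shaping are illustrative assumptions.
import numpy as np

class NestingEnv:
    def __init__(self, plate=(10, 10),
                 parts=((2, 3), (3, 3), (1, 4))):
        self.plate, self.parts = plate, list(parts)
        self.reset()

    def reset(self):
        self.grid = np.zeros(self.plate, dtype=bool)
        self.k = 0
        return self.grid.copy()

    def step(self, pos):
        h, w = self.parts[self.k]
        r, c = pos
        patch = self.grid[r:r + h, c:c + w]
        if patch.shape != (h, w) or patch.any():
            return self.grid.copy(), -1.0, True   # invalid placement
        self.grid[r:r + h, c:c + w] = True
        self.k += 1
        reward = h * w / (self.plate[0] * self.plate[1])
        return self.grid.copy(), reward, self.k == len(self.parts)

env = NestingEnv()
state, total = env.reset(), 0.0
for pos in [(0, 0), (0, 3), (3, 0)]:   # a hand-rolled policy
    state, r, done = env.step(pos)
    total += r
    if done:
        break
print(f"plate utilization reward: {total:.2f}")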
Technical Session · Manufacturing and Industry 4.0
Case Studies in Manufacturing II
Chair: Molly Arthur (Simio)
A Logistics Simulation Model Repository to Accelerate
Simulation Modeling in the Aerospace Industry
Bjoern Goedecke (Airbus Operations), Philipp Braun
(Hamburg University of Technology), Tobias Kuhrt (Airbus
Aerostructures), Nadhir Mechai and Arne Anhalt
(Accenture Industry X), Klaus Fischer and Helge Fromm
(Airbus Operations), and Yannik Dreischhoff (Accenture
Industry X)
Abstract
Abstract
Airbus established a digitalization strategy to
enhance logistics and production processes using
model-based systems engineering, including
material flow simulation. To store and reuse
simulation models, uphold quality standards, and
support logistics planning, a logistics
simulation model repository, novel to the
aerospace industry, is being developed. Its
development is illustrated through ongoing
simulation studies.
pdf
Specification, Simulation and Analysis of
Alternatives for On-line Scheduling of Independent
Jobs in Different Servers
Jaume Figueras Jové and Pau Fonseca Casas
(Universitat Politècnica de Catalunya)
Abstract
Abstract
Service companies face the challenge of
analyzing a large number of documents in order
to extract relevant information for decision
making. Such analysis can be performed
automatically, drastically reducing the time and
human effort needed. However, the computer
system must ensure that the analysis of each
document is completed within a specified period
of time, which depends on the type of the
document. A real case study is presented in this
paper, where the objective is to propose a new
scheduling model for a computer system with 6
servers and a total of 384 logical cores. The
arrival of documents is aperiodic and the
processing time stochastic, although the
processing time can be estimated from the number
of pages and the type of the document. A
simulation model has been developed to analyze
the quality of each algorithm. A delay maximum
time (DMT) algorithm is also proposed.
pdf
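A minimal sketch of the scheduling setting: estimate each
document's processing time from its page count and type, then
assign it to the server that frees up first and count deadline
misses. The per-type rate constants are assumptions, and the
greedy rule below is a baseline, not the proposed DMT
algorithm.

# Baseline sketch: estimate processing time from document type
# and pages, dispatch to the earliest-free server, count lates.
import heapq

RATE = {"invoice": 0.4, "contract": 1.1, "report": 0.7}  # sec/page (assumed)

def estimate(doc_type, pages):
    return RATE[doc_type] * pages

def schedule(docs, n_servers=6):
    """docs: list of (arrival, doc_type, pages, deadline)."""
    servers = [0.0] * n_servers      # time each server becomes free
    heapq.heapify(servers)
    late = 0
    for arrival, doc_type, pages, deadline in sorted(docs):
        free_at = heapq.heappop(servers)
        start = max(free_at, arrival)
        finish = start + estimate(doc_type, pages)
        late += finish > arrival + deadline
        heapq.heappush(servers, finish)
    return late

docs = [(0.0, "invoice", 10, 30.0), (1.0, "contract", 40, 60.0),
        (2.0, "report", 25, 45.0), (2.5, "invoice", 5, 15.0)]
print(schedule(docs, n_servers=2), "documents late")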
Simulation-Based Analyses and Improvements of the
Smart Line Management System in Canned Beverage
Industry: A Case Study in Europe
Ahmad Attar, Yuqing Jin, Martino Luis, Shuya Zhong, and
Voicu Ion Sucala (University of Exeter)
Abstract
Abstract
Canned water is one of the thriving markets in
the food and beverage industry. Given the tight
competition in this market, realistic analysis
of such production lines has become even more
attractive for all participating parties. In
this paper, we apply a KPI-driven
simulation-based approach to a smart production
plant of a key player in the European beverage
market. The project covers realistic
discrete-event modeling and analysis of the
system together with the suggested
scenario-based optimization for performance
improvement. Here, the smart line management
system is modeled and re-coded while considering
machine characteristics, failures, and their
overall influence on the production process. Our
proposed optimized scenario demonstrates
noticeably better results in all performance
indicators when compared to the existing state
of the system. Production speed increases by up
to 45 percent, resource utilization is evenly
balanced, and the overall work-in-progress
inventory is reduced significantly.
pdf
Technical Session · Manufacturing and Industry 4.0
Assembly Lines II
Chair: Ali Ahmad Malik (Oakland University)
Integrating Scheduling of Logistic Support Processes
in Agent-Based Industry 4.0 Assembly Simulation
Adrian Freiter (Fraunhofer Institute for Software and
Systems Engineering ISST) and Christian Schwede
(University of Applied Sciences and Arts Bielefeld)
Abstract
Abstract
Emerging decentralized production systems appear
promising for Industry 4.0 assembly, handling
the challenges of highly individualized
products. Matrix production, characterized by
freely linked workstations and a high automation
level, is highly flexible. Many efforts have
therefore already been made to explore its
advantages over existing flow shop production
systems, but also the additional challenges
arising from this new paradigm. One of these
challenges is the synchronization of the main
product flow and the supply part flow at the
individual workstations during order scheduling.
This paper presents a new approach that
integrates logistics support processes into the
scheduling of the main product flow, so that
part supply is considered in scheduling
decisions and waiting times are avoided. We
compare our integrated approach with the
existing decoupled scheduling approach, based on
a “bicycle assembly” scenario. The
results are promising, particularly when part
supply is a bottleneck.
pdf
MASM: Semiconductor Manufacturing
Track Coordinator - MASM: Semiconductor Manufacturing: John Fowler (Arizona State University), Young Jae Jang
(Korea Advanced Institute of Science and Technology, Daim
Research), Lars Moench (University of Hagen)
Technical Session · MASM: Semiconductor Manufacturing
Scheduling I
Chair: Reha Uzsoy (North Carolina State University)
A Reinforcement Learning Approach for Improved
Photolithography Schedules
Tao Zhang (Universität der Bundeswehr
München), Kamil Erkan Kabak (Izmir University of
Economics), Cathal Heavey (University of Limerick), and
Oliver Rose (Universität der Bundeswehr
München)
Abstract
Abstract
A Reinforcement Learning (RL) model is applied
to photolithography scheduling with direct
consideration of reentrant visits. The
photolithography process is generally regarded
as the bottleneck process in semiconductor
manufacturing, and improving its schedules would
result in better performance. Most RL-based
studies do not consider revisits directly or
guarantee convergence. A simplified discrete
event simulation model of a fabrication facility
is built, and a tabular Q-learning agent is
embedded into the model to learn through
scheduling. The learning environment considers
states and actions consisting of information on
reentrant flows. The agent dynamically chooses
one rule from a pre-defined rule set to dispatch
lots. The set includes the earliest stage first
rule, the latest stage first rule, and 8 more
composite rules. Finally, the proposed RL
approach is compared with 7 single and 8 hybrid
rules and is validated in terms of overall
average cycle times.
pdf
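A stylized sketch of the tabular Q-learning loop described
above, where the agent picks one dispatch rule per decision
point from a small rule set. The toy environment (binned states
and a synthetic reward) merely stands in for the fab
simulation; states, rules, and rewards are assumptions.

# Tabular Q-learning over dispatch rules; the environment is a
# synthetic stand-in for the fab simulation model.
import random

RULES = ["earliest_stage_first", "latest_stage_first", "fifo"]

def q_learning(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    states = range(5)            # e.g. binned reentrant-WIP levels
    Q = {(s, r): 0.0 for s in states for r in RULES}
    s = 0
    for _ in range(episodes):
        rule = (rng.choice(RULES) if rng.random() < eps
                else max(RULES, key=lambda r: Q[(s, r)]))
        # Stand-in for running the simulation one decision step:
        reward = -abs(s - RULES.index(rule)) + rng.gauss(0, 0.1)
        s_next = rng.choice(list(states))
        best_next = max(Q[(s_next, r)] for r in RULES)
        Q[(s, rule)] += alpha * (reward + gamma * best_next
                                 - Q[(s, rule)])
        s = s_next
    return {s: max(RULES, key=lambda r: Q[(s, r)]) for s in states}

print(q_learning())   # learned rule choice per (toy) state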
Deploying an Advanced AI Diffusion Scheduler at a
Renesas Fab
James Adamson and Lio Weinstock (Flexciton Ltd), Jay
Maguire (Renesas), Lara Nichols (FabTime), and Dionysios
Xenos (Flexciton Ltd)
Abstract
Abstract
Scheduling the diffusion area in a front-end
wafer fab poses challenges. This industrial case
focuses on scheduling diffusion at
Renesas’ Palm Bay Fab, which is always
seeking scheduling system improvements.
Transitioning to an advanced system, considering
fab-wide impacts on diffusion batching, enhances
Key Performance Indicators (KPIs). Our A.I.
scheduler utilizes optimization, heuristics, and
live data updates every five minutes.
Collaboration with FabTime integrates the
scheduler with the fab’s MES, ensuring
frequent updates. It optimizes batching, tool
allocation, and launch times, aligning with
Renesas’ objective to balance competing
goals. Initial results show 36% and 13%
increases in diffusion batch sizes at the clean
and expensive furnace toolsets, respectively.
The minor impact on
cycle time reflects the scheduler’s focus
on batching efficiency. This approach improves
efficiency and meets Renesas’ goals,
marking a positive step in optimizing their
wafer fab operations.
pdf
Deep Learning Enabling Digital Twin Applications in
Production Scheduling: Case of Flexible Job Shop
Manufacturing Environment
Amir Ghasemi (Amsterdam University of Applied Sciences,
Amsterdam School of International Business); Yavar
Taheri Yeganeh and Andrea Matta (Politecnico di Milano);
Kamil Erkan Kabak (Izmir University of Economics); and
Cathal Heavey (University of Limerick)
Abstract
Abstract
Digital twin-based Production Scheduling (DTPS)
is a process in which a digital model, known as
a “Digital Twin” (DT), replicates a
manufacturing system. A DT is essentially a
virtual representation of physical equipment and
processes, connected to the physical environment
through an online data-sharing infrastructure
within the Manufacturing Execution System (MES).
In reactive scheduling, the DT is used to detect
fluctuations in the scheduling plan and execute
rescheduling plans. In proactive scheduling, it
is used to simulate different production
scenarios and optimize future states of
production operations. In most production
scheduling cases, running detailed simulation
models is highly computationally intensive,
which works against the main goal of a DT
(online decision making). Thus, this research
examines the possibility of using data-driven
models within the DT of a Flexible Job Shop
(FJS) production environment to provide online
estimations of scheduling metrics, enabling
DT-based reactive/proactive scheduling.
pdf
Technical Session · MASM: Semiconductor Manufacturing
Time Issues in Wafer Fabs
Chair: Young Jae Jang (KAIST)
Optimization of Timelinks in Semiconductor
Manufacturing
Nina Dybowski, Maria Sander, and Ralf Sprenger (Infineon
Technologies Dresden GmbH)
Abstract
Abstract
The impact of timelinks on semiconductor
manufacturing has risen due to shrinking
technology sizes. Their operational control
determines, on the one hand, how well the time
restrictions are met and, on the other, the
impact on fab capacity. This paper discusses
both aspects and the influencing factors, such
as uptime stability and the length of the
timelink. A control approach is proposed,
evaluated, and discussed. Furthermore, a
monitoring system is introduced that enables
fast decision making and optimization of the
control parameters. Finally, a simulation study
is performed to evaluate different parameters
and the impact of influencing factors.
pdf
Queue Time Prediction Methodology in Semiconductor
Fab
Donguk Kim, Byeongseon Lee, and Sangchul Park (Ajou
University)
Abstract
Abstract
This paper presents a methodology for predicting
queue times in semiconductor fabrication, where
numerous complex and costly pieces of equipment
are utilized. Queue time, occurring between
continuous single or multi-processes, is a
crucial factor affecting the quality of wafers,
which can significantly impact costs. While most
semiconductor fabrications use queue time limits
as a key dispatching factor, some wafers may
still be scrapped or reworked. By predicting
queue times, we can reduce unnecessary waste by
blocking or re-dispatching wafers. Two
approximations are proposed and compared based
on accuracy and prediction time: a machine
learning model trained using experimental
results and a multi-resolution simulation model
with varying fidelity levels. The simulation
model is validated using the SMAT2022 data set.
pdf
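The machine-learning side of such a comparison might look like
the sketch below: fit a regressor that predicts a lot's queue
time from simple state features and report its MAE. The
features and synthetic data generator are assumptions for
illustration; the paper trains on experimental results and
validates against fab data.

# Sketch: queue-time regression from simple state features on
# synthetic data; feature set is an illustrative assumption.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 5000
wip_ahead = rng.integers(0, 50, n)     # lots queued ahead
tools_up = rng.integers(1, 6, n)       # available tools in the group
proc_time = rng.uniform(0.5, 3.0, n)   # hours per lot
queue_time = wip_ahead * proc_time / tools_up + rng.normal(0, 0.5, n)

X = np.column_stack([wip_ahead, tools_up, proc_time])
X_tr, X_te, y_tr, y_te = train_test_split(X, queue_time,
                                          random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print(f"MAE: {mean_absolute_error(y_te, model.predict(X_te)):.2f} hours")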
Processing Time and Machine Availability Prediction
in Semiconductor Manufacturing Using Neural
Networks
Taki Eddine Korabi, Gerard Goossen, Abhinav Kaushik,
Tijmen Tieleman, Jasper Van Heugten, and Jeroen
Bédorf (Minds.ai) and Shiladitya Chakravorty,
Detlef Pabst, and John Thomas (Globalfoundries)
Abstract
Abstract
In partnership with GlobalFoundries, we have
significantly advanced Processing Time (PT) and
machine availability prediction in fabrication
plants, utilizing an attention-based neural
network. This model is integrated into an MLOps
pipeline consisting of data collection,
preprocessing, training and deployment. The data
is augmented with features such as chamber usage
and process sequences. Compared to the current
model, which calculates average processing times
over a predefined context, our approach has
reduced the Mean Absolute Error (MAE) of PT
predictions by 43% to 80% across the crucial
areas: Etch, Diffusion, and Deposition. The
model also produces high quality predictions for
the remaining tools. The model is in the process
of being implemented in the fab to improve
scheduling and dispatching and to enhance
crucial Key Performance Indicators (KPIs) such
as cycle time
and throughput.
pdf
Technical Session · MASM: Semiconductor Manufacturing
Supply Chain Management I
Chair: Douniel Lamghari-Idrissi (ASML, Eindhoven
University of Technology)
Data-driven Warehouse Planning and Control under
Stochastic Demand and Labor Supply in Semiconductor
Capital Equipment Manufacturing
Douglas Morrice, Yanyue (Lilian) Ding, and Jonathan Bard
(The University of Texas at Austin)
Abstract
Abstract
Access to more information and sophisticated
analytics enables warehouse management to make
better data-driven decisions. In our study, we
develop a simulation-regression metamodel to
help warehouse managers plan workforce, space,
and equipment requirements for a leading
semiconductor capital equipment company. More
specifically, we use historical inbound and
outbound demand records and performance metrics
(such as workers’ hourly productivity and
moving rates) to predict the space, workforce,
and equipment required for different operation
stages in the warehouse facility. We implement
the simulation model in Python. Simulation
experiments provide insights on resource
planning under different demand scenarios and
supply constraints.
pdf
Assessing Delivery Commitments in Supply Chains: A
Matrix-Based Framework
Madhurima Vangeepuram (Hochschule Neu-Ulm), Hans Ehm and
Marco Ratusny (Infineon Technologies AG), Stefan
Faußer (Hochschule Neu-Ulm), and Stefan Heilmayer
and Tobias Leander Welling (Infineon Technologies AG)
Abstract
Abstract
Ensuring reliable and timely customer deliveries
is crucial to supply chain management. The
ability to meet delivery commitments is
essential for maintaining customer satisfaction.
Despite the importance of delivery commitments,
there is a lack of standard measurement
techniques for evaluating their quality.
Therefore, this paper introduces the term
Commitment Quality (CQ) and develops a CQ matrix
that can be used to measure the quality of
delivery commitments. The CQ matrix provides a
comprehensive set of quantitative measures to
evaluate different aspects of delivery
commitments. Finally, a numerical example based
on an order data sample from a semiconductor
manufacturer is presented and discussed. The
proposed framework aims to standardize the CQ,
enhancing transparency in delivery commitments.
pdf
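A hedged sketch of what a Commitment Quality matrix might
compute from order data, with each entry holding one
quantitative measure of delivery commitments. The specific
measures and column names below are plausible examples, not the
paper's actual CQ definitions.

# Example CQ-style measures computed from a toy order sample;
# all measures and data are illustrative assumptions.
import pandas as pd

orders = pd.DataFrame({
    "order": ["A", "B", "C", "D"],
    "requested": pd.to_datetime(["2023-01-10"] * 4),
    "committed": pd.to_datetime(["2023-01-12", "2023-01-10",
                                 "2023-01-20", "2023-01-11"]),
    "delivered": pd.to_datetime(["2023-01-12", "2023-01-14",
                                 "2023-01-19", "2023-01-11"]),
    "n_recommits": [0, 2, 1, 0],   # times the commit date moved
})

cq = pd.Series({
    "on_time_to_commit": (orders.delivered <= orders.committed).mean(),
    "avg_commit_gap_days":
        (orders.committed - orders.requested).dt.days.mean(),
    "avg_delivery_lateness_days":
        (orders.delivered - orders.committed).dt.days.clip(lower=0).mean(),
    "avg_recommits": orders.n_recommits.mean(),
})
print(cq)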
The Bullwhip Effect in End-to-end Supply Chains: The
Impact of Reach-based Replenishment Policies with a
Long Cycle Time Supplier
Hans Ehm, Chun Hei Chung, Sanchari Kar Chowdhury, Marco
Ratusny, and Abdelgafar Ismail (Infineon Technologies
AG)
Abstract
Abstract
The bullwhip effect (BWE), a well-known
phenomenon in supply chain management since it
was first identified in 1958, is causing
significant economic damage after disruptions.
While the role of human factors in BWE has been
widely recognized, the impact of different
replenishment policies on BWE mitigation has not
been thoroughly investigated. This paper
presents a study on the impact of reach-based
Kanban systems on the BWE in supply chains
containing suppliers with intrinsically
non-reducible long cycle times, such as those in
the semiconductor industry. Our findings suggest
that a reach-based replenishment system acts as
a BWE accelerator after significant disruptions,
which can result in line-downs downstream. We
propose changing to absolute stock targets for
replenishment policies during disruptions to
mitigate this aspect of the BWE root cause for
supply chains with long cycle time suppliers and
to reduce the risk of line-downs.
pdf
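The accelerator effect can be reproduced in a toy base-stock
loop: a reach-based target (cover a fixed number of weeks of
recent demand) inflates orders after a spike far more than an
absolute stock target does when the supplier lead time is long.
All parameters below are illustrative assumptions.

# Toy contrast of reach-based vs. absolute-target replenishment
# with a long-lead-time supplier; parameters are assumptions.
def run(policy, demand, lead_time=8, reach_weeks=4, abs_target=1200):
    stock, pipeline, orders = 400, [100] * lead_time, []
    for t, d in enumerate(demand):
        stock += pipeline.pop(0) - d
        recent = demand[max(0, t - 3):t + 1]
        rate = sum(recent) / len(recent)
        target = ((reach_weeks + lead_time) * rate
                  if policy == "reach" else abs_target)
        order = max(target - stock - sum(pipeline), 0)
        pipeline.append(order)
        orders.append(round(order))
    return orders

demand = [100] * 10 + [180] * 4 + [100] * 16   # temporary spike
print("reach   :", run("reach", demand)[8:20])
print("absolute:", run("absolute", demand)[8:20])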
Technical Session · MASM: Semiconductor Manufacturing
Planning
Chair: Tobias Voelker (University of Hagen)
Decentralized Decision-making Framework for Managing
Product Rollovers in the Semiconductor
Manufacturing
Carlos Leca (North Carolina State University), Karl
Kempf (Intel Corporation), and Reha Uzsoy (North
Carolina State University)
Abstract
Abstract
Competitiveness in the semiconductor industry
requires continuous management of product
rollovers, the process of introducing new
products and retiring older ones to maintain
market share. This paper presents a
decentralized decision-making framework to
coordinate product rollover decisions, based on
Lagrangian decomposition of a centralized model
with quadratic coordination errors in the
subproblem objectives, and a decentralized
heuristic that recovers feasible solutions from
the relaxed ones obtained from the
Lagrangian procedure. Experimental results show
that this decentralized framework delivers
promising results, obtaining near-optimal
solutions in modest CPU times.
pdf
Data-driven Production Planning Formulations with
Inventory Considerations
Tobias Voelker and Lars Moench (University of Hagen)
Abstract
Abstract
Data-driven (DD) production planning
formulations for semiconductor wafer fabrication
facilities (wafer fabs) are studied in this
paper. These formulations are based on a set of
system states representing the congestion
behavior of the wafer fab with work in process
and resulting output levels. We establish two DD
formulations with inventory considerations. The
first variant is a shortfall-based
chance-constrained formulation that considers
safety stocks at the finished goods inventory
level. The second variant is a simple
scenario-based stochastic program where the
objective function reflects the expected
inventory holding and backlog cost under
uncertainty. The two variants are compared with
the conventional DD formulation in a rolling
horizon environment using a simulation model of
a large-scale wafer fab. The simulation
experiments demonstrate that the stochastic
program achieves the largest profit under all
experimental conditions.
pdf
Agent-based Decision Support in Borderless Fab
Scenarios in Semiconductor Manufacturing
Raphael Herding (Forschungsinstitut für
Telekommunikation und Kooperation, Westfälische
Hochschule) and Lars Moench (Forschungsinstitut für
Telekommunikation und Kooperation, University of Hagen)
Abstract
Abstract
The design and implementation of a multi-agent
system (MAS) for a borderless fab scenario are
described. In such a scenario, lots are
transferred from one wafer fab to a nearby one
to perform process steps of the transferred
lots. Production planning is carried out
individually for each of the wafer fabs. The
modeling of the available and requested capacity
in the production planning models of the
participating wafer fabs is affected by the lot
transfer. The transfer of route information from
one wafer fab to another to automatically
generate the linear programming models is
described. Production planning is carried out in
a rolling horizon setting using a cloud-based
infrastructure. We show by simulation
experiments with the MAS that correct modeling
of the capacity in production planning results
in improved profit compared to a setting where
the lot transfer is not taken into account in
the planning formulations.
pdf
Technical Session · MASM: Semiconductor Manufacturing
Supply Chain Management II
Chair: Hans Ehm (Infineon Technologies AG)
Component Redesigns and the Impact of their
Implementation Policy
Best Contributed Applied Paper - Finalist
Steffi Neefs and Douniel Lamghari-Idrissi (ASML
Netherlands B.V., Eindhoven University of Technology)
and Rob Basten and Geert-Jan van Houtum (Eindhoven
University of Technology)
Abstract
Abstract
An OEM who maintains a fleet of complex systems
strives for high system availability for its
customers. Frequently failing components lead to
system unavailability and high maintenance
costs. Consequently, the OEM might decide to
upgrade components. We develop a model that
quantifies the impact of the introduction of an
upgraded component on the OEM's costs and number
of failures to define the best implementation
strategy. Using a Markov process, we evaluate
four policies differing in the roll-out strategy
of new parts, either immediate or corrective,
and the phase-out strategy of old parts, either
rework or salvage. The model is used in a case
study at ASML. We conclude that, in the case
study, reworking is preferred over salvaging as
the phase-out strategy and corrective
replacements are generally preferred over
immediate replacements for the roll-out
strategy.
pdf
Exact and Heuristic Algorithms for a Bi-criteria
Order-lot Pegging Problem in a Multi-Fab Setting
Andreas Haspecker and Lars Moench (University of Hagen)
Abstract
Abstract
We study an order-lot pegging problem in
semiconductor supply chains. The problem deals
with assigning already released lots to orders
and with planning wafer releases to fulfill
orders if there are not enough lots in the wafer
fabs. The objectives are minimizing the total
tardiness of the orders and minimizing the total
cost. We are interested in computing the set of
Pareto-optimal plans. Based on a mixed-integer
linear formulation, an ε-constraint method
is proposed for small-sized problem instances.
Moreover, a non-dominated sorting genetic
algorithm (NSGA)-II algorithm is designed for
tackling larger problem instances within a
reasonable amount of computing time. We perform
computational experiments with the
ε-constraint method for small-sized problem
instances and with the NSGA-II scheme for small-
and medium-sized problem instances.
pdf
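A generic sketch of the ε-constraint idea applied to a
bi-criteria problem: minimize total cost subject to total
tardiness being at most ε, then sweep ε to trace the Pareto
frontier. The tiny binary program below (built with PuLP) is a
stand-in for the paper's order-lot pegging MILP, not its actual
formulation.

# Generic epsilon-constraint sweep over a toy bi-criteria
# binary program; data and variables are assumptions.
import pulp

def eps_constraint(eps_values):
    pareto = []
    for eps in eps_values:
        prob = pulp.LpProblem("pegging_sketch", pulp.LpMinimize)
        # x[i] = 1 if order i is served from existing lots
        # (cheap but late); otherwise a new wafer release is
        # planned (fast but costly).
        x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(3)]
        tardiness = [4, 2, 6]    # tardiness if served from lots
        cost_new = [10, 7, 12]   # cost of a new release instead
        prob += pulp.lpSum(c * (1 - xi) for c, xi in zip(cost_new, x))
        prob += pulp.lpSum(t * xi for t, xi in zip(tardiness, x)) <= eps
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        pareto.append((eps, pulp.value(prob.objective)))
    return pareto

for eps, cost in eps_constraint([0, 2, 6, 12]):
    print(f"tardiness <= {eps}: min cost = {cost}")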
A Case Study for Modeling the Economics of Foundry
Operations
Larissa Nietner (LineLab, MIT); Parker Gould (InchFab);
and Scott Nill (LineLab, MIT)
Abstract
Abstract
This case study presents a novel approach for
modeling a fab, which allows for more rapid
results than traditional simulation, while
optimizing various variables like tool count or
throughput, and capturing equipment sharing
between co-produced devices. This modeling
method was applied at InchFab, a foundry that
uses ultra-small substrate sizes to allow for
more flexibility and lower costs when
fabricating small production quantities. The new
approach was used to find the cost-optimal rate
achievable for a primary product on certain tool
counts - and then the cost-optimal rate of a
secondary product, without any changes to
equipment count. Using novel types of analyses
and sensitivity figures, we demonstrate that it
can be economically sensible to add a product to
a fab that is already producing the cost-optimal
quantity of a base product. This is an important
finding, as some fabs consider offering
additional foundry services on existing
equipment.
pdf
Technical Session · MASM: Semiconductor Manufacturing
Digital Twins and Simulation
Chair: Cathal Heavey (University of Limerick)
Digital Twin for Design and Analysis of Cluster Tool
in Wafer Fabrication
Joonick Hwang and Sang Do Noh (Sungkyunkwan University)
Abstract
Abstract
In the semiconductor industry, many retrofits
are being made to improve the production
efficiency of manufacturing facilities. However,
due to the nature of the data provided by the
cluster tool, which is a semiconductor
manufacturing facility, engineers have some
limitations in utilizing it. To address this
issue, it is necessary to introduce a digital
twin model that can verify the performance of
the semiconductor process cluster tool in a
virtual environment, and to apply optimal mass
production conditions based on this predictive
data in the operational stage. In this study, we
propose a digital twin model that visualizes
congestion factors during wafer transfer and
evaluates the productivity of cluster tools.
pdf
A Study on the Impact of Lot Priorities Mix on Cycle
Times in Semiconductor Manufacturing
Adrien Wartelle, Stéphane
Dauzère-Pérès, and Claude Yugma (Ecole
des Mines de Saint-Etienne) and Quentin Christ and
Renaud Roussel (STMicroelectronics)
Abstract
Abstract
This paper presents a simulation study of the
priority mix planning problem in semiconductor
fabrication. The objective of the study is to
analyze the impact of the mix of lot types, each
associated with a priority, on the cycle time of
the Implantation workshop. We specifically
analyzed the waiting time of lots and the
associated speed-up or slow-down on a
work-center. The tests were conducted using
AnyLogic 8 on industrial instances from
STMicroelectronics Crolles. Results show that a
speed-up of more than 300% for high-priority
lots, with a slow-down of less than 10% for the
others, is possible if the proportion of
high-priority lots is kept under 10%. This study
is a first step toward better priority mix
management, which holds a central strategic
place in the semiconductor industry.
pdf
Backward Simulation: A Customer-Focused
Diversification of Fab Simulation Applications in a
Highly Automated Semiconductor Production Line
Wolfgang Scholl and Patrick Preuß (Infineon
Technologies Dresden GmbH) and Christoph Laroque and
Madlene Leissau (University of Applied Sciences Zwickau)
Abstract
Abstract
In modern manufacturing environments, the
digital transformation to smart factories cannot
be achieved without data-driven methods like
discrete, event-driven simulation. This paper
provides an overview of current simulation
applications at Infineon Dresden, especially
short-term simulation for production control and
long-term simulations to forecast process flows
in the wafer fabrication facilities.
Furthermore, it illustrates the current status
of research activities in the area of backward
simulation for operational decision support in
order scheduling, drawing on the latest research
results.
pdf
Technical Session · MASM: Semiconductor Manufacturing
Modeling Techniques in Semiconductor Manufacturing
Chair: Robert Dodge (Arizona State University)
Duplicate Reticles Management System
Sandar Kyaw, Ronald Taylor, and Jean Fakhoury
(GLOBALFOUNDRIES)
Abstract
Abstract
Duplicate reticles provide a fab with an
opportunity to mitigate the impact of
catastrophic reticle damage or the need for
offsite repair/cleaning and provide the
necessary capacity for products in a high volume
manufacturing environment. Implementation of a
management system for duplicate reticles helps
to maintain a minimum number of run paths while
ensuring availability of multiple reticles to
process lots simultaneously. Dedicating the
duplicate reticles each to a group of exposure
tools prevents duplicate reticles from ending up
on the same exposure tool, and managing this
dedication by tool/reticle inhibits has proven
to be an effective method of distributing the
WIP between the exposure tools while minimizing
the management of the layer supported by those
duplicate reticles.
pdf
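The dedication scheme can be sketched in a few lines: split the
copies of each duplicate reticle across disjoint exposure-tool
groups so that two copies can never queue on the same tool.
Group construction and all names below are assumptions for
illustration.

# Sketch: dedicate duplicate reticle copies to disjoint
# exposure-tool groups; names and group sizes are assumptions.
from itertools import cycle

def dedicate(reticles, tools, n_groups):
    """Return {reticle_copy: allowed tool set}, groups disjoint."""
    groups = [tools[i::n_groups] for i in range(n_groups)]
    assignment = {}
    for copies in reticles.values():
        for copy_id, group in zip(copies, cycle(groups)):
            assignment[copy_id] = set(group)
    return assignment

reticles = {"layer-M1": ["M1-a", "M1-b"],
            "layer-V1": ["V1-a", "V1-b"]}
tools = ["expo-1", "expo-2", "expo-3", "expo-4"]
for copy_id, allowed in dedicate(reticles, tools, n_groups=2).items():
    print(copy_id, "->", sorted(allowed))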
A Testing Based Approach for Security Analysis of
Smart Semiconductor Systems
Robert Dodge, Giulia Pedrielli, and Petar Jevtić
(Arizona State University)
Abstract
Abstract
Digital factories have been recognized as a
paradigm with considerable promise for improving
manufacturing performance. Digital Twins have
emerged as a powerful tool to improve control
performance for large-scale smart manufacturing
systems. We argue that DT-based smart factories
are vulnerable to attacks that use the DT to
damage the system while remaining undetectable,
specifically in high-cost processes, where DT
technologies are more likely to be deployed. As
an instructive example, we consider smart
semiconductor processes with focus on
photolithography. To this end, we formulate a
static optimization problem to maximize the
damage of a cyber-attack against a
photolithography digital twin that minimizes
detectability to the process controller. Results
demonstrate that this problem formulation yields
attack policies that successfully reduce the
throughput of the system at the cost of
increased detectability to a common process
control technique. These results encourage more
research in the domain, especially toward
scalable and policy-like solutions.
pdf
Reusable Ontology Generation and Matching from
Simulation Models
Ming-Yu Tu, Hans Ehm, Abdelgafar Ismail, and Philipp
Ulrich (Infineon Technologies AG)
Abstract
Abstract
As simulating semiconductor manufacturing grows
more complex, model reuse becomes appealing,
since it can reduce the time spent developing
future models. Also, considering the large
network of the semiconductor supply chain,
knowledge sharing can enable the efficient
development of simulation models across a
collaborative organization. This need for
reusability and interoperability of simulation
models motivates this paper. We address these
challenges through ontological modeling and
linking of simulation components. The first
application is generating reusable ontologies
from simulation models. A second application is
ontology matching for knowledge sharing between
simulation components and a meta-model of the
semiconductor supply chain. The proposed
approach succeeds in automatically transforming
simulation models into reusable knowledge and
identifying interconnections in a semiconductor
manufacturing system.
pdf
Technical Session · MASM: Semiconductor Manufacturing
Scheduling II
Chair: Stephane Dauzère-Pérès (École
Nationale Supérieure des Mines de Saint-Étienne,
BI Norwegian Business School)
Industrial Multi-Objective Optimization of a Large
Complex Job-Shop in Semiconductor Manufacturing
Abdel Bitar and Sebastian Knopp (Planimize); Karim
Tamssaouet (Planimize, BI Norwegian School of
Management); Stéphane Dauzère-Pérès
(Ecole des Mines de Saint-Etienne); and Ludovic Delcloy
and Renaud Roussel (STMicroelectronics, Crolles)
Abstract
Abstract
This paper describes the industrialization of an
advanced optimization engine that was developed
by Planimize and put into production in the
cleaning and diffusion work center of the most
advanced factory of a semiconductor
manufacturing company. Hundreds of lots
requiring several thousand operations in the
work center must be scheduled on about 150
machines, while taking complex constraints into
account, in particular hundreds of time
constraints, and optimizing a collection of
criteria. The optimization engine provides
significantly better results, runs significantly
faster, and can handle much larger problem
instances than the previous Constraint
Programming optimization engine used in the
factory.
pdf
Minimizing Makespan for a Multiple Orders Per Job
Scheduling Problem in a Two-stage Permutation
Flowshop
Rohan Korde and John Fowler (Arizona State University)
and Lars Mönch (FernUniversität in Hagen)
Abstract
Abstract
The scheduling problem we study in this paper is
known as the multiple orders per job (MOJ)
problem (Mason et al. 2004), which is
encountered in a few different industries,
including front-end semiconductor manufacturing.
We study the MOJ scheduling problem in a
two-stage permutation flowshop with some
real-world constraints, with the goal of
minimizing the makespan. We use a MIP solver and
various heuristics to solve this NP-hard
scheduling problem for various stage
configurations and bottleneck types. For
moj(ipm-ipm), the makespan was minimized by the
MIP solver, regardless of the bottleneck type,
for over 90% of the small-sized problem
instances. When the heuristics minimized the
makespan, the Slope heuristic was the fastest
and the NEH heuristic was the slowest for over
90% of the large-sized problem instances.
pdf
Combining Time Series Data and Snapshot Data for
Situation Aware Dispatching in Semiconductor
Manufacturing
Chew Wye Chan and Boon Ping Gan (D-SIMLAB Technologies
Pte Ltd) and Wentong Cai (Nanyang Technological
University)
Abstract
Abstract
Dispatch rules are commonly used to schedule
lots in the semiconductor industry. Previous
studies have indicated that adapting dispatch
rules can improve overall factory performance.
Machine learning has proven useful in learning
the relationship between manufacturing
situations and dispatch rules. However, using
only snapshot data at a given point in time to
generate features for these models does not
account for trends in the manufacturing
situation, which can be represented as time
series data. To address this issue, the proposed
method generates features from time series data
and combines them with features from snapshot
data to train machine learning models for
dispatch rule prediction. The results
demonstrate the effectiveness of this
methodology, as the combination of features from
both types of data achieves the highest
prediction accuracy. Simulation results show
that this approach can adapt the dispatch rule
according to the manufacturing situation and
achieve a comparable factory performance.
pdf
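A sketch of the feature-combination idea: derive trend features
from a WIP time series (slope, volatility, net change), join
them with snapshot features, and train a dispatch-rule
classifier. The feature choices, toy labels, and classifier are
illustrative assumptions.

# Sketch: combine snapshot features with time-series trend
# features for dispatch-rule prediction; data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def trend_features(series):
    x = np.arange(len(series))
    slope = np.polyfit(x, series, 1)[0]       # WIP trend
    return [slope, np.std(series), series[-1] - series[0]]

def build_features(snapshot, wip_series):
    return np.array(snapshot + trend_features(wip_series))

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(500):
    wip = np.cumsum(rng.normal(0, 1, 24)) + 50  # last 24h of WIP
    snapshot = [wip[-1], rng.integers(1, 8)]    # current WIP, tools up
    X.append(build_features(snapshot, wip))
    y.append("critical_ratio" if wip[-1] > 50 else "fifo")  # toy label
clf = RandomForestClassifier(random_state=0).fit(np.array(X), y)
print(clf.predict([build_features([55, 4], np.linspace(40, 55, 24))]))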
Technical Session · MASM: Semiconductor Manufacturing
MASM Keynote: Simulation, Optimization and AI for
Semiconductor Manufacturing and Supply Chains: Four
Decades of Progress and a Vision for the Future
Chair: Lars Moench (University of Hagen)
Simulation, Optimization and AI for Semiconductor
Manufacturing and Supply Chains: Four Decades of
Progress and a Vision for the Future
Hans Ehm (Infineon Technologies AG)
Abstract
Abstract
Semiconductor manufacturing and supply chain
processes are among the most complex, yet at the
same time among the most rewarding, processes in
the world. In thousands of detailed chemical and
physical unit processes, carried out in
cleanrooms under statistical process control,
chips emerge on wafers and are assembled and
tested into components. The Modeling and
Analysis of Semiconductor Manufacturing (MASM)
conference, embedded in the annual Winter
Simulation Conference (WSC), was, is, and will
be key to understanding the optimization and
simulation challenges in this domain.
Operating curve management, which targets low
variability and thus enables a low flow factor
(speed) together with high utilization (a good
cost position), was an early achievement. With
discrete-event, agent-based, and system dynamics
simulations on four levels (machine, fab,
internal and external supply chain), solution
options for complex interactions could be
proposed, based on sophisticated mathematical
models running on simulation testbeds such as
the MIMAC models and their successors. Accurate
planning and advanced scheduling, as well as
available-to-promise (ATP) generation and usage
with traditional or artificial intelligence (AI)
/ deep learning (DL) methods, require a huge
amount of real data or qualified synthetic data
(QSD).
The semantic web for semiconductors, and for
supply chains containing semiconductors, bears
the potential to provide this urgently needed
QSD in the volume, (integrated) complexity, and
accuracy required. Quantum bit (qubit) based
algorithms could provide the speed for the next
and over-next generations of optimization and
simulation in our domain.
pdf
Technical Session · MASM: Semiconductor Manufacturing
Panel: Semiconductor Manufacturing in Times of
Geopolitical Tensions
Chair: Peter Lendermann (D-SIMLAB Technologies Pte Ltd)
Semiconductor Manufacturing in Times of Geopolitical
Tensions: How MASM Can Help with Making Supply Chains
More Resilient
Peter Lendermann (D-SIMLAB Technologies)
Abstract
This panel assembles a number of prominent
representatives from industry and academia to
discuss how, in times of increasing geopolitical
risk, semiconductor supply chains can be made
more resilient through Modeling and Analysis of
Semiconductor Manufacturing (MASM) techniques
and enabling software solutions.
pdf
Technical Session · MASM: Semiconductor Manufacturing
Data and Modeling Issues
Chair: Oliver Rose (University of the Bundeswehr
Munich)
Semiconductor Equipment Health Monitoring with
Multi-View Data
Jeongsun Ahn, Hong-Yeon Kim, Sang-Hyun Cho, and
Hyun-Jung Kim (Korea Advanced Institute of Science and
Technology) and Hongyeon Kim, Hyeonjeong Choi, and Dain
Ham (Wonik IPS)
Abstract
Monitoring the state of semiconductor equipment
is crucial for ensuring optimal performance and
preventing downtime. In previous studies,
researchers have attempted to derive a health
index that represents the overall condition of
the equipment as a single index. However, these
studies have often relied solely on time-series
data from each sensor, neglecting other
important viewpoints engineers consider when
monitoring the equipment. To address this
limitation, we propose a multi-view data set
specifically designed for semiconductor
equipment, which incorporates process, trend,
and spatial data. In addition, we present a
framework for deriving a hierarchical health
index based on a multi-view data set. The
hierarchical structure is derived using a
hierarchical spectral clustering method, and an
autoencoder-based health index is used. We have
verified the effectiveness of our approach with
real data sets, demonstrating its potential as a
valuable tool for monitoring the condition of
semiconductor equipment.
pdf
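As a simplified stand-in for the hierarchical health index
described above, the sketch below groups correlated
sensors by spectral clustering and uses a per-group
reconstruction error as a sub-index, aggregated into one
overall index. PCA is used here in place of the paper's
autoencoder, and all data are synthetic.

    # Simplified stand-in: sensor grouping + reconstruction-error index.
    import numpy as np
    from sklearn.cluster import SpectralClustering
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    healthy = rng.normal(size=(200, 12))        # training window, 12 sensors
    current = rng.normal(size=(50, 12)) + 0.5   # recent window with a drift

    # Group correlated sensors (two groups assumed here).
    corr = np.abs(np.corrcoef(healthy.T))
    groups = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(corr)

    sub_indices = []
    for g in np.unique(groups):
        cols = groups == g
        pca = PCA(n_components=1).fit(healthy[:, cols])
        recon = pca.inverse_transform(pca.transform(current[:, cols]))
        sub_indices.append(float(np.mean((current[:, cols] - recon) ** 2)))

    print("sub-indices:", np.round(sub_indices, 3),
          "overall health index:", round(float(np.mean(sub_indices)), 3))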
Modeling Multivariate Relations in Multiblock
Semiconductor Manufacturing Data Using Process PLS to
Enhance Process Understanding
Geert van Kollenburg and Richard Verhoeven (Eindhoven
University of Technology), Daniele Pagano
(STMicroelectronics s.r.l.), and Mike Holenderski and
Nirvana Meratnia (Eindhoven University of Technology)
Abstract
The complexity of manufacturing process data has
made it more challenging to extract useful
insights. Data-analytic solutions have therefore
become essential for analyzing and optimizing
manufacturing processes. Path modeling, also
known as structural equation modeling, is a
statistical approach that can provide new
insights into complex multivariate relationships
between process variables from different stages
of the manufacturing process. The incorporation
of expert process knowledge and subsequent
interpretation of model results can facilitate
communication between stakeholders, promoting
lean manufacturing and achieving the
sustainability goals of Industry 5.0. This paper
describes the use of a path modeling algorithm
called Process Partial Least Squares (Process
PLS) to gain new insights into the relationships
between equipment data from several machines
within the semiconductor manufacturing process.
The methods used in this study can assist
manufacturers in understanding the relations
between different machines and identify the most
influential variables that may be used to
develop soft-sensors.
pdf
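A minimal two-block illustration of the underlying idea,
using ordinary PLS (scikit-learn's PLSRegression) to relate
an upstream machine's variables to a downstream machine's
variables; Process PLS, as described above, chains such
blocks along the process path. The data are synthetic.

    # Two-block PLS sketch: upstream machine -> downstream machine.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(2)
    upstream = rng.normal(size=(300, 6))              # synthetic equipment data
    downstream = upstream[:, :2] @ rng.normal(size=(2, 4)) \
                 + 0.1 * rng.normal(size=(300, 4))    # driven by two variables

    pls = PLSRegression(n_components=2).fit(upstream, downstream)
    print("R^2 of downstream block:", pls.score(upstream, downstream))
    # Loadings hint at the most influential upstream variables
    # (candidate inputs for a soft-sensor).
    print("upstream loadings (component 1):",
          np.round(pls.x_loadings_[:, 0], 2))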
Multi-Resolution Modeling Method for Automated
Material Handling Systems in Semiconductor
FABs
Kwanwoo Lee, Woosung Jeon, and Sangchul Park (Ajou
University)
Abstract
This paper presents a novel modeling framework
for semiconductor fabrication facilities (FABs)
that integrates production and material handling
systems. Because the productivity of
semiconductor FABs is significantly influenced
by their material-handling systems, existing
research has focused on optimizing operational
logic considering both aspects. However, the
scale and complexity of modern FABs make
implementation of fully integrated models
challenging, resulting in slow simulation speeds
over long simulation horizons. To address this issue, we
propose a multi-resolution modeling framework
that creates material-handling system models at
two distinct resolution levels, enabling fast,
fully integrated FAB models while accounting for
material-handling effects. Experimental results
demonstrated accelerated simulation completion
compared to single-resolution models while
maintaining consistent results. The proposed
method provides a practical approach for
semiconductor FABs to investigate long-term
phenomena and urgent decision-making problems
while considering both production and
material-handling systems.
pdf
Technical Session · MASM: Semiconductor Manufacturing
Machine Learning Applications
Chair: John Fowler (Arizona State University)
A Self-supervised Learning Based Framework for
TFT-LCD Defect Classification
Sheng-Xiang Kao (International Intercollegiate Ph.D.
Program, National Tsing Hua University); Yu-Hsun Lin
(Department of Industrial Engineering and Engineering
Management, National Tsing Hua University); and Chen-Fu
Chien (Intelligent Manufacturing and Circular Economy
Research Center, National Tsing Hua University)
Abstract
This study presents a self-supervised learning
based framework for TFT-LCD defect
classification in semiconductor smart
manufacturing. Utilizing the Swapping
Assignments between Views (SwAV) model trained
on 1,000,000 unlabeled TFT-LCD images, the
framework achieves an overall top-1 accuracy of
0.709 and precision of 0.7812 in the downstream
task of classifying 13 types of TFT-LCD defects.
Compared to using SwAV weights pre-trained on
ImageNet, the proposed domain-specific
self-supervised learning model performs
significantly better, emphasizing the importance of
domain-specific training. The framework offers
manufacturers a cost-efficient decision support
system, enhancing TFT-LCD defect classification
quality.
pdf
Root Cause Analysis in Supply Chain Planning Using
Explainable Machine Learning
Pavle Kecman, Josephine Fang, and Ana Glaser (NXP
Semiconductors)
Abstract
In the highly dynamic world of semiconductor
manufacturing, planning analysts are asked to
analyze variations between weekly production
plans with the goal of identifying a resolution
in a landscape involving elaborate optimization
models with significant interdependence between
data elements. We propose a solution to
effectively analyze the weekly planning engine
output and identify the data elements with
significant contribution to the outcome. An
explainable Machine Learning model is trained
and deployed to simulate the behavior of the
planning engine. Each model execution can be
explained to identify the features with the most
significant contribution to prediction. The
resulting application contributes to a timely
resolution to the production plan deviation,
while generating significant productivity gains.
pdf
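One common way to realize the explainability step above is
a surrogate model whose feature attributions flag likely
root causes. The hedged sketch below trains a surrogate to
mimic a planning output and ranks inputs by permutation
importance; the feature names and data are placeholders,
not NXP's planning engine.

    # Surrogate + permutation importance for root-cause ranking.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(3)
    features = ["demand", "capacity", "yield", "lead_time"]  # assumed inputs
    X = rng.normal(size=(400, len(features)))
    plan_output = 3 * X[:, 1] - 2 * X[:, 3] + 0.1 * rng.normal(size=400)

    surrogate = GradientBoostingRegressor(random_state=0).fit(X, plan_output)
    imp = permutation_importance(surrogate, X, plan_output,
                                 n_repeats=10, random_state=0)
    for name, score in sorted(zip(features, imp.importances_mean),
                              key=lambda p: -p[1]):
        print(f"{name}: {score:.3f}")  # largest scores flag likely drivers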
Scaling Deep Reinforcement Learning for Queue-time
Management in Semiconductor Manufacturing
Harel Yedidsion, Prafulla Dawadi, David Norman, and
Emrah Zarifoglu (Applied Materials)
Abstract
Queue-Time Constraints (QTCs) set a maximum
waiting time for lots between consecutive
process steps. In semiconductor manufacturing,
exceeding these limits results in yield loss,
rework, or scrapping. Managing QTCs is
challenging due to the need for lots to wait
until there is available capacity for the final
step. Specifically, accurately calculating the
capacity is computationally expensive, making it
difficult to handle large instances. Our
research addresses the scalability of QTC
management in real fabs with numerous
constraints. We propose a deep Reinforcement
Learning (RL) solution to handle lot release
into the QTC. We describe the infrastructure
developed for RL training using actual fab data,
assess the performance of our RL approach, and
compare it to three baseline solutions. Our
empirical evaluation demonstrates that the RL
method surpasses the baselines in key
performance metrics including queue-time
violations, while requiring negligible online
compute time.
pdf
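A toy stand-in for RL-based lot release into a queue-time
constraint (QTC): tabular Q-learning instead of the paper's
deep RL, with a single final-step machine. State is the
number of lots inside the QTC loop, the action is
release/hold, and all rates and penalties are assumptions.

    # Tabular Q-learning sketch for QTC lot release.
    import numpy as np

    rng = np.random.default_rng(4)
    MAX_WIP, LIMIT = 8, 4          # lots in loop; above LIMIT -> violation
    Q = np.zeros((MAX_WIP + 1, 2)) # Q[state, action]; action 1 = release
    alpha, gamma, eps = 0.1, 0.95, 0.1

    def step(wip, action):
        wip = min(wip + action, MAX_WIP)              # optionally release
        served = int(wip > 0 and rng.random() < 0.6)  # final step finishes
        wip -= served
        reward = served - (2 if wip > LIMIT else 0)   # throughput vs. risk
        return wip, reward

    wip = 0
    for _ in range(50_000):
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[wip]))
        nxt, r = step(wip, a)
        Q[wip, a] += alpha * (r + gamma * Q[nxt].max() - Q[wip, a])
        wip = nxt

    print("release policy by WIP level:", np.argmax(Q, axis=1))  # 1 = release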
Military and National Security Applications
Track Coordinator - Military and National Security
Applications: Clay Koschnick (Air Force Institute of Technology), James
Starling (U.S. Military Academy)
Technical Session · Military and National Security Applications
Military Keynote: Creating Live Virtual Constructive
Environments to Evaluate Human and System Resilience
Chair: James Starling (U.S. Military Academy)
Creating Live Virtual Constructive Environments to
Evaluate Human and System Resilience
Imre Balogh (Naval Postgraduate School)
Abstract
Live Virtual Constructive (LVC) exercises are
becoming ubiquitous for training and mission
rehearsal in the military domain because the use
of LVC provides the most realistic environment
available short of actual military operations.
The mixture of live exercises with simulated
components (constructive simulations and virtual
simulators) allows for the creation of a context
for the training or rehearsal that is richer and
more representative of the real world than would
be possible with only live events. This ability
to embed live activity into a synthetic
environment to provide realism has attracted the
interest of the Test and Evaluation (T&E)
community, and recently there have been
increasing efforts to include LVC in the T&E
tool suite.
This talk will discuss some of the work we have
been doing at the Naval Postgraduate School with
LVC and how these environments can be used to
assess and improve system and human resilience
in operational environments.
pdf
Technical Session · Military and National Security Applications
Enhancing Military Decision-Making: Strategies for Success
Chair: Mehdi Benhassine (Royal Military Academy)
Incorporation of Military Doctrines and Objectives
into an AI Agent via Natural Language and Reward in
Reinforcement Learning
Michael Möbius, Daniel Kallfass, and Matthias Flock
(Airbus Defence and Space GmbH) and Thomas Doll and
Dietmar Kunde (German Armed Forces)
Abstract
This paper emphasizes the integration of sound
tactical behavior in the generation of realistic
military simulations, which includes the
definition of combat tactics, doctrine, rules of
engagement, and concepts of operations. Recent
advances in reinforcement learning (RL) enable
RL agents to generate a wide range of tactical
actions. A multi-agent ground combat scenario is
used in this paper to demonstrate how a machine
learning (ML) application generates strategies
and issues commands while following a given
objective. Natural language is used to issue
doctrines and objectives to improve
communication between the human advisor and the
ML agent. This allows us to embed objectives and
existing doctrines into the reasoning of an
artificial intelligence (AI). The research
demonstrates the successful integration of
natural language to enable an agent to achieve
different objectives. This groundwork will
enhance RL agents' ability in the future to
uphold the doctrines and rules of military
operations.
pdf
Accounting for Individual Shooting Skills in Combat
Models
Vikram Mittal and Paul F. Evangelista (United States
Military Academy)
Abstract
There is significant variation in shooting
ability among U.S. Army soldiers, which is often
overlooked in combat simulations. This study
introduces a Monte-Carlo model to estimate the
dispersion of a soldier's shot group based on
their marksmanship score. This model is used to
assess the impact of marksmanship on a squad's
performance through two analyses. The first
analysis employs a dueling model to examine
various marksmanship skills between dueling
teams, offering insights into overmatch
requirements. The second analysis uses an
agent-based combat simulation to investigate the
influence of marksmanship on squad performance
in a dueling scenario in addition to tactical
rural and urban missions. The results reveal
that marksmanship becomes increasingly crucial
in enhancing lethality and survivability as the
distance between combatants grows. Notably,
superior marksmanship skills are particularly
vital in offensive, rural operations. These
findings emphasize the significance of
marksmanship and its implications for military
requirements and tactical decision-making.
pdf
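An illustrative Monte Carlo in the spirit of the model
above: shot-group dispersion (bivariate normal, in
milliradians) widens as the marksmanship score drops, and
hit probability against a fixed-size target falls with
range. The score-to-dispersion mapping below is our own
assumption, not the paper's fitted model.

    # Monte Carlo hit probability from marksmanship-driven dispersion.
    import numpy as np

    rng = np.random.default_rng(5)

    def hit_probability(score, range_m, target_radius_m=0.25, shots=100_000):
        sigma_mrad = 1.5 - 0.01 * score          # assumed: higher score,
        sigma_m = sigma_mrad / 1000.0 * range_m  # tighter group; project
        x = rng.normal(0, sigma_m, shots)        # angular error to target
        y = rng.normal(0, sigma_m, shots)
        return np.mean(np.hypot(x, y) <= target_radius_m)

    for r in (50, 150, 300):
        print(r, "m:",
              "score 25 ->", round(hit_probability(25, r), 3),
              "| score 40 ->", round(hit_probability(40, r), 3))

The gap between the two scores grows with range, mirroring
the paper's finding that marksmanship matters most as the
distance between combatants grows.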
Technical Session · Military and National Security Applications
Protection: Modeling Mass Casualty Incidents
Chair: David Beskow (United States Military Academy)
Open-Air Artillery Strike in a Rural Area: A
Hypothetical Scenario
Mehdi Benhassine (Royal Military Academy); Ruben De
Rouck, Michel Debacker, and Ives Hubloue (Vrije
Universiteit Brussel); Erwin Dhondt (DO Consultancy);
John Quinn (Charles University); and Filip Van
Utterbeeck (Royal Military Academy)
Abstract
The escalation of the Russian invasion of
Ukraine, characterized by the deployment of
conventional weapon systems, inflicts
significant morbidity and mortality on the
victims. It is imperative to ascertain optimal
medical practices and disaster response
strategies throughout the battlefield to
minimize casualties and safeguard the well-being
of medical and disaster responders. The
challenges posed by large-scale battlefield
threats can rapidly overwhelm healthcare
providers due to the sheer number of victims,
which can result in the depletion of medical
supplies and insufficient training and
resources. To address these issues, we utilized
the SIMEDIS simulator to establish and implement
a battlefield scenario involving an open-air
artillery strike in a field. Mortality rates
were calculated based on the application of
bleeding control measures and the distribution
policy for allocating victims to medical
treatment facilities. Controlling hemorrhage
remains the most crucial factor influencing
mortality outcomes.
pdf
A Modular Simulation Model for Mass Casualty
Incidents
Kai Meisner (Bundeswehr Medical Academy, University of
the Bundeswehr Munich) and Heiderose Stein, Nadiia
Leopold, Tobias Uhlig, and Oliver Rose (University of
the Bundeswehr Munich)
Abstract
During military conflicts, the number of
casualties is likely to exceed medical
capabilities. For best treatment results, the
patients must be distributed according to their
needs to the available resources such as medical
facilities and means of transportation. Computer
simulations are used to verify and optimize
current medical planning. However, recent models
lack the capability of testing a wide range of
decision rules. In this paper, we address this
issue and propose a modular simulation concept
whose components can be adapted and exchanged
independently. Using modular submodels to
control the simulated objects, we enable the
implementation of a wide range of object
behavior. A prototype implementation of the
proposed concept is presented, showing the
effects of applying different dispatching rules
in an evacuation scenario.
pdf
Technical Session · Military and National Security Applications
Optimizing Aerial Operations: Advancements in Air Mission
Planning
Chair: Nicholas Shallcross (U.S. Army, University of
Arkansas)
Implementing Efficient Dynamic Threat Avoidance
Routing Based on Dijkstra's Shortest Path Algorithm in
the Advanced Framework for Simulation, Integration,
and Modeling (AFSIM)
Dante Reid, Lance Champagne, and Nathan Gaw (Air Force
Institute of Technology)
Abstract
Simulating pre-planned routes and dynamic threat
avoidance routing represents a significant
problem for operations analysts. Without methods
to create operationally valid routes through
automation, the analyst is generally faced with
hard coding individual routes for multiple
aircraft over the entirety of the mission set.
This research developed, implemented, and
analyzed threat avoidance routing based on
Dijkstra's algorithm for aircraft attempting to
operate in an anti-access area denial (A2AD)
environment capable of dynamically updating the
mission route as new threat information is
learned. A designed experiment was conducted to
determine the impact of grid parameters on
operational effectiveness metrics and
computational costs. Statistical analysis
results show that the proposed algorithm
produced the best operational performance with
grid spacing set to 50% of the smallest surface
to air missile (SAM) threat radius without
incurring prohibitive computational costs.
pdf
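The core routing step described above can be sketched as
Dijkstra's algorithm over a grid of waypoints whose edge
costs combine distance with a penalty inside SAM threat
rings. Grid size, threat locations, and the penalty weight
below are illustrative assumptions.

    # Dijkstra threat-avoidance routing on a waypoint grid.
    import heapq, math

    N, SPACING = 30, 1.0                       # 30x30 grid of waypoints
    sams = [((10, 12), 5.0), ((20, 20), 7.0)]  # (center, radius) threat rings

    def cost(a, b):
        pa = (a[0] * SPACING, a[1] * SPACING)
        pb = (b[0] * SPACING, b[1] * SPACING)
        penalty = sum(100.0 for c, r in sams if math.dist(pb, c) <= r)
        return math.dist(pa, pb) + penalty

    def dijkstra(start, goal):
        dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == goal:
                break
            if d > dist[u]:
                continue
            x, y = u
            for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1),
                           (x+1, y+1), (x-1, y-1), (x+1, y-1), (x-1, y+1)):
                if 0 <= nx < N and 0 <= ny < N:
                    nd = d + cost(u, (nx, ny))
                    if nd < dist.get((nx, ny), float("inf")):
                        dist[(nx, ny)], prev[(nx, ny)] = nd, u
                        heapq.heappush(pq, (nd, (nx, ny)))
        path, node = [], goal
        while node != start:
            path.append(node)
            node = prev[node]
        return [start] + path[::-1]

    route = dijkstra((0, 0), (29, 29))
    print(f"{len(route)} waypoints; route skirts the threat rings")

Dynamic re-routing then amounts to re-running the search
from the aircraft's current node whenever new threat
information updates the ring list.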
Simulation-Based Optimization of Air Force Mission
Planning
Best Contributed Applied Paper - Finalist
Mihaela Lechner and Alexander Roman (University of the
Bundeswehr Munich), Thomas Mayer (ESG Elektroniksystem-
und Logistik-GmbH), and Tobias Uhlig and Oliver Rose
(University of the Bundeswehr Munich)
Abstract
Military planning operations deal with highly
dynamic environments and a variety of complex
optimization challenges. In order to support
decision-makers in this process, innovative
concepts are required that can automatically
generate applicable solutions for certain
aspects of mission planning. Such instruments
can simplify the planning process, reduce risks,
and lower operating costs. This paper presents a
simulation-based optimization framework that
addresses three problems in the context of
aerial warfare planning: task assignment,
scheduling, and route planning. These problems
are tackled with interconnected heuristics based
on either greedy approaches or genetic
algorithms. Additionally, hierarchical task
networks are employed to incorporate domain
knowledge in form of tactical doctrines into the
solution. Our simulation results confirm the
viability of the proposed approach for small to
medium-sized scenarios. However, further
investigation with regard to the evaluation
function and the simulation environment is
required.
pdf
Discrete Event Simulation of Aircraft Sortie
Generation on an Aircraft Carrier
Hee Chang Yoon and Seung Heon Oh (Seoul National
University); Jung-Hoon Chung, Hyuk Lee, and Sun-Ah Jung
(Korea Institute of Machinery & Materials); and Jong Hun
Woo (Seoul National University)
Abstract
The Sortie Generation Rate (SGR), which refers to
the number of sorties that can be generated per
unit time, is a key indicator for evaluating the
ability of an airbase. However, an aircraft
carrier has many constraints compared to a
land-based airbase, such as spatial and
environmental constraints, making it difficult
to apply existing land-based research to analyze
aircraft carrier operations. On the other hand,
the Sortie Generation Process (SGP) on an
aircraft carrier is similar to a
logistics/production system, in that sorties are
generated through the flow of aircraft. Therefore, this
study proposes a framework for analyzing the SGP
on an aircraft carrier using discrete event
simulation and defines the classes that make up
the simulation. In addition, SGP analysis
simulations were implemented using the proposed
framework and several experiments were performed
to demonstrate the feasibility of applying the
proposed framework in practice.
pdf
Technical Session · Military and National Security Applications
Improving Cyber and Information Warfare Operations
Chair: Josiah Steckenrider (United States Military
Academy)
The Holistic Prioritized SATCOM Throughput
Requirements (HPSTR) Stochastic Model
Matthew Wesloh, Noelle Douglas, Brianne White, and
Nicholas Shallcross (United States Army, The Research
and Analysis Center)
Abstract
The U.S. Army's command and control
modernization efforts rely upon an
expeditionary, mobile, hardened, and resilient
network. Dispersed network access and data
availability are central to increasing the
operational speed required for effective command
and control. The Army must define its satellite
communication (SATCOM) requirements to support
network modernization. This paper proposes the
Holistic Prioritized SATCOM Throughput
Requirements (HPSTR) simulation that prioritizes
and adjudicates SATCOM throughput requirements
for operational military units. Additionally,
the simulation evaluates the impact of a
contested, degraded, and operationally limited
(CDO) communication environment on force
effectiveness. HPSTR addresses knowledge gaps
concerning U.S. Army SATCOM activities in a
large-scale combat operation (LSCO) to inform
modernization decisions.
pdf
Using Simulated Narratives to Understand Attribution
in the Information Dimension
Elijah Bellamy and David Beskow (United States Military
Academy)
Abstract
Conducting a measured response to cyber or
information attack is predicated on attribution.
When these operations are conducted covertly or
through proxies, uncertainty in attribution
limits response options. To increase attribution
certainty in the information dimension, the
authors have developed a suite of supervised
machine learning models that attribute an
emerging narrative to historical narratives from
known actors. These models were first developed
on simulated narratives produced with a Large
Language Model. Once the supervised
classification models were developed and tested
on the simulated narratives, they were evaluated
on social media narratives from three known
actors. The attribution models are
language agnostic and offer one-vs-rest and
multi-class options. All models performed at
relatively high accuracy and can provide
decision support for cyber response decisions.
pdf
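A hedged sketch of language-agnostic narrative
attribution: character n-gram TF-IDF features feeding a
one-vs-rest logistic regression, as one plausible
realization of the models described above. The mini-corpus
and actor labels are placeholders for the simulated
narratives.

    # One-vs-rest narrative attribution with character n-grams.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import make_pipeline

    texts = ["grain shipments blocked by sanctions",
             "sanctions starve the region",
             "election was rigged by outsiders",
             "outsiders rigged the vote",
             "new pipeline brings prosperity",
             "prosperity flows from the pipeline"]
    actors = ["A", "A", "B", "B", "C", "C"]   # placeholder actor labels

    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # language-
        OneVsRestClassifier(LogisticRegression(max_iter=1000)),   # agnostic
    )
    model.fit(texts, actors)
    print(model.predict(["the vote was rigged"]))  # attribute a new narrative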
Uncertainty-Quantified, Robust Deep Learning for
Network Intrusion Detection
Joshua Wong, Alexander Berenbeim, David Bierbrauer, and
Nathaniel Bastian (United States Military Academy)
Abstract
Cyber threats are moving beyond human
comprehension and reaction capability in a
rapidly evolving world. Deep learning models for
network intrusion detection are becoming
evermore crucial in processing network traffic
to filter benign content from malicious
activity. However, novel attacks such as
zero-days are becoming more frequent,
demonstrating the need for robust deep learning
models to flag attacks while providing
predictive certainty guarantees. Therefore,
detecting out-of-distribution (OOD) inputs at
inference time is crucial to address the rapidly
changing environment while keeping up with
evolving cyber threats. We develop multi-class
deep learning models for network intrusion
detection, comparing deterministic with Bayesian
neural networks estimated using Hamiltonian
Monte Carlo. We also propose new uncertainty
quantification scoring measures for performance
evaluation to evaluate certainty in predictions.
During our experimentation, our best performing
proposed Bayesian deep learning model detected
89.1% and 86.9% of the OOD packets at the 5% and
0.1% significance levels, respectively.
pdf
Technical Session · Military and National Security Applications
Simulating Search and Naval Operations
Chair: Lance Champagne (AFIT)
A Comparison of Lissajous Curves to Traditional
Patterns in Aerial Search Simulations
Mitchell J. Miller, Victor E. Trujillo, James E. Bluman,
and J. Josiah Steckenrider (United States Military
Academy)
Abstract
Technological advancements have made autonomous
aerial search using unmanned systems a promising
approach to search and rescue, targeting, and
other mission sets. A handful of standard flight
paths are traditionally used for aerial search,
but this research presents the Lissajous pattern
as an alternative to these traditional paths
that could potentially locate targets more
quickly. This research considers a searching
agent with imperfect detection capability and
leverages Monte Carlo simulations to generate
data for various flight paths. Each flight path
is evaluated by cumulative density functions
representing the time it takes an unmanned
aircraft system (UAS) to reach some desired
percent certainty of locating a randomly
generated target in a search area. Results show
that Lissajous curves are viable search paths
for superior aerial target detection,
particularly for evasive targets in a Reciprocal
Gaussian sampling distribution.
pdf
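The Monte Carlo comparison above can be sketched as
follows: a Lissajous flight path and an imperfect sensor
searching for a uniformly random target, with time to
first detection recorded per trial. Frequencies, sensor
footprint, and per-sample detection probability are
illustrative assumptions.

    # Monte Carlo time-to-detect along a Lissajous search path.
    import numpy as np

    rng = np.random.default_rng(6)
    a, b, delta = 3, 4, np.pi / 2      # Lissajous frequencies/phase (assumed)
    t = np.linspace(0, 2 * np.pi, 4000)
    path = np.column_stack([np.sin(a * t + delta), np.sin(b * t)])

    def time_to_detect(p_detect=0.3, footprint=0.05, trials=2000):
        times = []
        for _ in range(trials):
            target = rng.uniform(-1, 1, size=2)       # random target in area
            near = np.linalg.norm(path - target, axis=1) <= footprint
            hits = np.flatnonzero(near & (rng.random(len(t)) < p_detect))
            times.append(t[hits[0]] if hits.size else np.inf)
        return np.array(times)

    times = time_to_detect()
    print("P(detect by mid-pattern):", np.mean(times <= np.pi).round(3))

The cumulative distribution of these detection times is
exactly the kind of curve the paper uses to compare
Lissajous paths against traditional search patterns.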
Naval Combat Wargame Simulation for Susceptibility
Analysis
Gun-Woong Byun and Seung-Heon Oh (Seoul National
University, Department of Naval Architecture and Ocean
Engineering); Jong-Ho Nam (Korea Maritime & Ocean
University, Division of Naval Architecture and Ocean
Systems Engineering); and Jong Hun Woo (Seoul National
University, Department of Naval Architecture and Ocean
Engineering)
Abstract
An engagement between naval ships is defined as
a multi-agent system with multiple ships
interacting. Because of the limitations of
conducting and analyzing engagement, it is
common to use modeling and simulation or wargame
simulations. Most of the existing wargame
simulation studies focus on simulation
frameworks rather than real-world applications
and tend to focus on the evaluation of single
entities that comprise a wargame. Thus, this
study improves the realism of the simulation by
modeling objects that constitute a complex
engagement situation based on the simulation
framework. In addition, we developed analytical
tools to automate and accelerate Monte Carlo
simulations of engagement-level wargames that
otherwise require large amounts of human effort
and time. The developed simulations enable the
application of various engagement scenarios to
evaluate strategies and tactics. Furthermore,
experiments are possible while altering the
design parameters of the naval ship, which
allows for the evaluation of the ship's
performance in combat.
pdf
Track Coordinator - Modeling Methodology: Rodrigo Castro (ICC-CONICET, Universidad de Buenos Aires),
Andrea D'Ambrogio (University of Roma Tor Vergata), Gerd
Wagner (Brandenburg University of Technology), Gabriel
Wainer (Carleton University)
Technical Session · Modeling Methodology
Complex Systems
Chair: Margaret Loper (Georgia Tech Research Institute)
Towards an Automatic Construction of Simulation
Scenarios: A Systematic Review
Christopher W.H. Davis (Microsoft), Antonie J. Jetter
(Portland State University), and Philippe J. Giabbanelli
(Miami University)
Abstract
A predictive simulation is built on a conceptual
model (e.g., to identify relevant constructs and
relationships) and serves to estimate the
potential effects of `what-if' scenarios.
Developing the conceptual model and plausible
scenarios has long been a time-consuming
activity, often involving the manual processes
of identifying and engaging with experts, then
performing desk research, and finally crafting a
compelling narrative about the potential futures
captured as scenarios. Automation could speed-up
these activities, particularly through text
mining. We performed the first review on
automation for simulation scenario building.
Starting with 420 articles published between
1995 and 2022, we reduced them to 11 relevant
works. We examined them through four research
questions concerning data collection, extraction
of individual elements, connecting elements of
insight and (degree of automation of) scenario
generation. Our review identifies opportunities
to guide this growing research area by
emphasizing consistency and transparency in the
choice of datasets or methods.
pdf
Evolving LVC to Include Evaluation of Human-AI
Teaming Dynamics
Margaret Loper and Valerie Sitterle (GTRI)
Abstract
There are significant differences between using
systems as human-controlled tools to accomplish
a specific task and using systems designed to
“cooperate and partner” with humans
to achieve capabilities beyond either side
acting alone. The live, virtual, constructive
(LVC) paradigm increasingly emphasized by the
DoD has wide acceptance and is congruent with
how the military thinks about training,
evaluation, and mission rehearsal. Consequently,
it may help address these challenges. This paper
aims to overview the current LVC construct,
challenges associated with human-AI teaming and
intentional design of these dynamics to achieve
new capabilities, and the resulting need to
evolve the LVC construct to improve our pursuit
of understanding and evaluation that leads to
effective fielding.
pdf
How to Combine Models? Principles and Mechanisms to
Aggregate Fuzzy Cognitive Maps
Ryan Schuerkamp and Philippe J. Giabbanelli (Miami
University) and Umberto Grandi and Sylvie Doutre
(Université Toulouse Capitole)
Abstract
Fuzzy Cognitive Maps (FCMs) are graph-based
simulation models commonly used to model complex
systems. They are often built by participants
and aggregated to compare the viewpoints of
homogeneous groups (e.g., anglers and ecologists)
and increase the reliability of the FCM.
However, the default approach for aggregation
may propagate the errors of an individual
participant, producing an aggregate FCM whose
structure and simulation outcomes do not align
with the system of interest. Alternative
aggregation methods exist; however, there are no
criteria to assess the quality of aggregation
methods. We define nine desirable criteria for
FCM aggregation algorithms and demonstrate how
three existing aggregation procedures from
social choice theory can aggregate FCMs and
fulfill desirable criteria, enabling the
assessment and comparison of FCM aggregation
procedures to support modelers in selecting an
aggregation algorithm. Moreover, we classify
existing aggregation algorithms to provide
structure to the growing body of aggregation
approaches.
pdf
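As a small illustration of why the aggregation mechanism
matters, the sketch below compares the element-wise mean
(the common default) with the element-wise median on
synthetic participant maps, one of which contains an
erroneous edge weight; the standard sigmoid FCM update is
used to simulate each aggregate.

    # Comparing FCM aggregation mechanisms on synthetic maps.
    import numpy as np

    rng = np.random.default_rng(7)
    maps = rng.uniform(-1, 1, size=(5, 4, 4))   # 5 participants, 4 concepts
    maps[0, 1, 2] = 5.0                         # one erroneous weight

    def simulate(W, steps=20):
        a = np.full(W.shape[0], 0.5)            # initial concept activations
        for _ in range(steps):
            a = 1.0 / (1.0 + np.exp(-W.T @ a))  # sigmoid FCM update
        return a

    for name, W in [("mean", maps.mean(axis=0)),
                    ("median", np.median(maps, axis=0))]:
        print(name, "aggregate ->", np.round(simulate(W), 3))

The median aggregate is far less sensitive to the single
outlier weight, which is one of the failure modes of the
default approach that the paper's criteria address.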
Technical Session · Modeling Methodology
Modeling Methods
Chair: Gabriel Wainer (Carleton University)
A Low-Code Approach for Simulation-based Analysis of
Process Collaborations
Paolo Bocciarelli and Andrea D'Ambrogio (University of
Rome Tor Vergata)
Abstract
The simulation-based analysis of process
collaborations introduces significant
challenges, such as the ability to focus on the
interchange of information and data without
disclosing any internal details of collaboration
participants' processes. The use of distributed
simulation (DS) provides good opportunities to
face these challenges. However, properly using
DS standards and technologies requires
significant technical know-how and effort. This
paper introduces a largely automated approach to
carry out distributed simulations of process
collaborations. The DS standard addressed by the
paper is the High Level Architecture (HLA),
which is used to analyze process collaborations
specified by using the Business Process Model
and Notation (BPMN). The degree of automation is
obtained by using a low-code development
paradigm based on automated model
transformations that reduce the amount of manual
effort required to code the HLA-based
simulation. An example application is also
discussed to underline the pros and cons of the
proposed approach.
pdf
Incremental Transformation of BPSIM-enriched BPMN
Models into DEVS
Mariane El Kassis, Francois Trousset, Gregory
Zacharewicz, and Nicolas Daclin (IMT Mines Alès)
Abstract
In this paper, we introduce a novel methodology
for business process simulation, focusing on the
incremental transformation of Business Process
Modeling and Notation (BPMN) models enriched
with Business Process Simulation Interchange
Standard (BPSIM) elements into the Discrete
Event System Specification (DEVS) formalism. The
proposed method enhances the precision and
consistency of simulations by systematically
converting BPMN components and BPSIM
characteristics into DEVS representations, using
adaptable rules and templates. A major
contribution of this work is the introduction of
the Interaction Intermediate Model (I2M), a
model that provides a visually lucid
representation with significant semantics,
effectively encapsulating BPMN and BPSIM
simulation aspects. The resulting DEVS model
ensures accurate, reliable, and interoperable
simulations. We provide a thorough analysis of
this methodology, emphasize its advantages, and
validate its efficiency through a case study.
This method is applicable across various
sectors, effectively bridging the gap between
conceptual modeling and simulation
methodologies.
pdf
An Approach Towards Predicting the Computational
Runtime Reduction from Discrete-event Simulation Model
Simplification Operations
Mohd Shoaib (Indian Institute of Technology Delhi),
Navonil Mustafee (University of Exeter), and Varun
Ramamohan (Indian Institute of Technology Delhi)
Abstract
Model simplification is the process of
developing a simplified version of an existing
discrete-event simulation (DES) to study the
performance of specific system subcomponents
relevant to the analysis. The simplified model
is referred to as a 'metasimulation'. A widely
used model simplification operation is
abstraction, which involves replacing the
subcomponents, not core to the analysis, from
the parent DES model with random variables
representing the lengths of stay in said
subcomponents. However, the one-time
computational cost of developing metasimulations
via abstraction can itself be considerable, as
the approach necessitates executing the parent
model for generating the necessary data for
developing the metasimulation. Thus, this study
proposes a queuing-theoretic approach for
estimating the computational runtime reduction
(CRR) achieved through abstraction, wherein the
prediction of CRR precedes the development of
the metasimulation. Towards this, we present
preliminary results from applying this approach
for simplification of DES models made up of
M/M/n workstations.
pdf
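A back-of-the-envelope version of the idea above: in an
event-driven simulation, runtime scales roughly with the
number of processed events, so replacing a subcomponent by
a length-of-stay random variable removes its internal
events, and the predicted runtime reduction is the share
of events it would have generated. The event accounting
and visit counts below are our own simplifications.

    # Crude computational runtime reduction (CRR) estimate.
    arrival_rate = 10.0                       # lots per hour entering the line
    visits = {"core_A": 1.0, "core_B": 1.0,   # mean visits per lot (assumed)
              "aux_C": 2.0, "aux_D": 1.5}     # aux stations see re-entrant flow
    abstracted = {"aux_C", "aux_D"}           # replaced by LOS random variables

    # Each visit costs roughly 2 simulation events (arrival + completion).
    events = {k: 2 * arrival_rate * v for k, v in visits.items()}
    total = sum(events.values())
    removed = sum(v for k, v in events.items() if k in abstracted)
    print(f"predicted computational runtime reduction ~ {removed / total:.0%}")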
Technical Session · Modeling Methodology
Panel: Forty Years of Event Graphs in Research and
Education
Chair: Gerd Wagner (Brandenburg University of
Technology)
Forty Years of Event Graphs in Research and
Education
Murat M. Gunal (Fenerbahce University); Yahya Ismail
Osais (King Fahd University of Petroleum and Minerals,
Interdisciplinary Research Center for Intelligent
Secure Systems); Lee Schruben (University of California,
Berkeley); Gerd Wagner (Brandenburg University of
Technology); and Enver Yücesan (INSEAD)
Abstract
Forty years ago, in 1983, Lee Schruben proposed
the Event Graph formalism and modeling language,
thereby precisely defining the paradigm of
Event-Based Simulation, which had been pioneered
20 years earlier by SIMSCRIPT.
The purpose of this panel is for a group of
Event Graph researchers both from Operations
Research and Computer Science, including the
inventor of Event Graphs and one of his former
PhD students who has made essential
contributions to their theory, to discuss their
views on the history and potential of Event
Graph modeling and simulation. In particular,
the adoption of Event Graphs as a discrete
process modeling language in Discrete Event
Simulation and in Computer Science, and their
potential as a foundation for the entire field
of Discrete Event Simulation and for the fields
of process modeling and AI in Computer Science
is debated.
pdf
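For readers new to the formalism, here is a minimal
event-graph-style M/M/1 queue: each event schedules its
successor edges (Arrive -> Arrive, Arrive -> Start if
idle, Start -> Finish, Finish -> Start if the queue is
nonempty), in the spirit of Schruben's notation. Rates are
arbitrary.

    # Minimal event-graph M/M/1 simulation.
    import heapq, random

    random.seed(8)
    lam, mu, horizon = 0.8, 1.0, 10_000.0
    fel, clock, queue, busy, served = [], 0.0, 0, False, 0

    def schedule(delay, event):
        heapq.heappush(fel, (clock + delay, event))

    schedule(random.expovariate(lam), "arrive")
    while fel:
        clock, event = heapq.heappop(fel)
        if clock > horizon:
            break
        if event == "arrive":
            queue += 1
            schedule(random.expovariate(lam), "arrive")  # Arrive -> Arrive
            if not busy:
                schedule(0.0, "start")                   # Arrive -> Start
        elif event == "start":
            queue, busy = queue - 1, True
            schedule(random.expovariate(mu), "finish")   # Start -> Finish
        elif event == "finish":
            busy, served = False, served + 1
            if queue > 0:
                schedule(0.0, "start")                   # Finish -> Start

    print("throughput ~", served / horizon)              # approaches lambda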
Technical Session · Modeling Methodology
DEVS
Chair: Hessam Sarjoughian (Arizona State University)
A Context-Free Grammar for Generating Full Classic
DEVS Models
María Julia Blas and Silvio Gonnet (INGAR
(CONICET-UTN)) and Doohwan Kim and Bernard Zeigler
(RTSync Corp.)
Abstract
Existing grammars generate Finite Deterministic
DEVS models, a restricted subset of DEVS. The
proposed context-free grammar generates the
unrestricted set of Classic DEVS models. The
grammar is implemented in ANTLR, a powerful
parser generator for reading, processing,
executing, or translating structured text or
binary files. ANTLR enables the efficient
processing of the specifications needed for
generating members of Classic DEVS with ports.
Applications include an easier introduction to
DEVS for students and easier translation between
different DEVS implementations.
pdf
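To show the structure such a grammar must generate, here
is a hand-written Classic DEVS atomic model (a simple
one-job processor) with the usual elements: state, time
advance, internal and external transitions, and output
function. The port names and service time are
illustrative.

    # A minimal Classic DEVS atomic model in Python.
    from dataclasses import dataclass

    INFINITY = float("inf")

    @dataclass
    class Processor:
        service_time: float = 2.0
        job: object = None                 # state: job in service, or None
        sigma: float = INFINITY           # time advance of the current state

        def ta(self):                     # time-advance function
            return self.sigma

        def delta_int(self):              # internal transition: job done
            self.job, self.sigma = None, INFINITY

        def delta_ext(self, e, x):        # external transition on port "in"
            if self.job is None:          # Classic DEVS: ignore input if busy
                self.job, self.sigma = x, self.service_time
            else:
                self.sigma -= e           # continue the current service

        def out(self):                    # output function, port "out"
            return self.job

    p = Processor()
    p.delta_ext(0.0, "lot-1")
    print("emit:", p.out())               # after ta() elapses
    p.delta_int()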
CLAVS/ODVS: Combining Class/Object Diagrams and
DEVS
Jordan Parezys and Randy Paredis (University of Antwerp)
and Hans Vangheluwe (University of Antwerp, Flanders
Make)
Abstract
The Discrete Event System Specification (DEVS)
formalism is a modular discrete-event modeling
formalism. It has a formal specification in
terms of systems theory and is supported by
several efficient and usable simulator
implementations. In these implementations, the
DEVS formalism is often “grafted”
onto an existing Object-Oriented programming
language. Examples are C++ in the case of ADEVS
and Python in the case of PythonPDEVS. To match
this grafting, we present CLAVS, the CLAss
diagram and deVS formalism and its instance
counterpart ODVS, the Object Diagram and deVS
formalism, and their visual notations. These
languages use an automaton-like visual notation
for Atomic DEVS models and a Class Diagram
notation augmented with port information and
event structure specification. An implementation
of a visual CLAVS/ODVS modeling environment
built on draw.io is presented. The use and
usefulness of the formalism are demonstrated by
means of a simple traffic model whose detailed
specification is presented.
pdf
Project Simulation, Validation and Deployment with
DEVS: IoT Framework for Blooms Monitoring and
Alert
Segundo Esteban, Giordy A. Andrade, José L.
Risco-Martín, Jesús Chacón, and Eva
Besada-Portas (Complutense University of Madrid)
Abstract
Harmful Algal and Cyanobacterial Blooms (HABs)
constitute a relevant public health and
ecological hazard due to their frequent
production of toxic metabolites, which is
increased by the current vulnerability of water
resources to environmental changes such as
global warming, population growth, and
eutrophication. These blooms have been typically
assessed by combining predictive models with
manual collection. However, these processes are
generally independent and do not provide data
with sufficient resolution to apply proactive
policies. In this work, we propose a novel and
integrative framework to straightforwardly
combine the conception, design, and deployment
of advanced Early-Warning Systems (EWSs) that
will allow us to automate all the processes
involved in HABs detection and management and
apply proactive policies. The framework is built
upon solid Modeling and Simulation (M&S)
principles, through Model Based Systems
Engineering (MBSE) as the driving methodology
and Discrete Event System Specification (DEVS)
as the M&S formalism.
pdf
Technical Session · Modeling Methodology
Digital Twins
Chair: Claudia Szabo (The University of Adelaide)
Automated Simulation and Virtual Reality Coupling for
Interactive Digital Twins
Kai Franke, Jan Marius Stürmer, and Tobias Koch
(German Aerospace Center (DLR), Institute for the
Protection of Terrestrial Infrastructures)
Abstract
While there are many efforts to simulate
technical systems in virtual environments and
provide a visual interaction for applications
such as training, authoring and analysis, the
process of generating applications still
requires a lot of manual work. This is
particularly critical in the context of
interactive Digital Twins for resilience, where
uncertain events can occur and every malfunction
or mistreatment of any part of the system needs
to be modeled. This paper presents an approach
to model such systems in a modular way by
automating the generation of its components for
a game engine and simulators based on a common
specification. Component instances are then
synchronized bidirectionally across applications
to achieve interaction between the game engine
and simulators. An example hydraulic system is
implemented and tested to demonstrate our
approach, which needs minimal manual work by
using predefined components. The solution can be
extended by integrating more components and
simulations.
pdf
Cityscape: A City-level Digital Twin Model Generator
for Simulation & Analyses
Dhananjai M. Rao (Miami University)
Abstract
Cities and large urban areas face a myriad of
challenges ranging from city planning,
developing sustainable transportation, managing
natural catastrophes, and mitigating
communicable diseases. Addressing these
challenges requires effective analysis and
planning which in turn necessitates the use of
sufficiently detailed models or "digital twins."
Such detailed models that embody multifaceted
demographic and city characteristics are
challenging to generate. This paper presents our
ongoing work to develop a novel model generation
method and software suite called Cityscape, that
fuses diverse real-world data sets to generate a
digital twin for a given city. Specifically, our
method combines data from authoritative sources
including PUMS, PUMAs, and OpenStreet Map to
generate the digital twin. We have used the city
of Chicago (IL, USA) as a case study to verify
and validate (with ~85% confidence) our proposed
method.
pdf
Microscopic Vehicular Traffic Simulation: Toward
Online Calibration
Yulong Wang and John Miller (University of Georgia) and
Casey Bowman (University of North Georgia)
Abstract
The modern world requires accurate and efficient
traffic modeling to facilitate commerce and
ensure citizens' safety. Traffic simulations
play an important role in this endeavor by
allowing traffic engineers to test traffic
systems and policies before implementing them.
This requires traffic simulation models that
have the ability to accurately represent
real-world traffic systems, and which are also
capable of re-calibrating model parameters when
needed through online calibration. This work
presents four contributions toward this
endeavor. The data science system ScalaTion was
extended with agent-based modeling and makes use
of virtual threads for each vehicle, which
improves the efficiency of simulations. The
modeling, simulating, and data loading schema
were all optimized to enhance the system
performance as well. Additionally, a new arrival
model strategy was implemented, improving the
accuracy of the model calibration phase.
pdf
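The kind of microscopic car-following law whose parameters
online calibration would re-estimate from detector data
can be sketched with the Intelligent Driver Model (IDM);
this is a standard model, offered here as an illustration
rather than the paper's own vehicle logic, with assumed
parameter values.

    # One IDM follower reacting to a slower leader (Euler steps).
    import math

    def idm_accel(v, v_lead, gap, v0=30.0, T=1.5, a=1.0, b=2.0, s0=2.0):
        s_star = s0 + v * T + v * (v - v_lead) / (2 * math.sqrt(a * b))
        return a * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)

    v, gap, v_lead, dt = 25.0, 40.0, 15.0, 0.5
    for _ in range(10):
        acc = idm_accel(v, v_lead, gap)
        v = max(0.0, v + acc * dt)
        gap += (v_lead - v) * dt
    print(f"follower speed {v:.1f} m/s, gap {gap:.1f} m")  # decelerating

Online calibration then becomes the problem of adjusting
parameters such as v0, T, a, and b so that simulated
trajectories track field measurements.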
Technical Session · Modeling Methodology
Modeling Languages
Chair: Andrea D'Ambrogio (University of Roma
TorVergata)
FACT: A Domain Specific Language Based on a
Functional Algebra for Continuous Time Modeling
Edil G. Medeiros, Eduardo Lemos, and Eduardo Peixoto
(Universidade de Brasília)
Abstract
Hybrid and cyber-physical systems create synergy
by combining digital modules with analog
implementations of signal processing operations
typically implemented in the digital domain. We
propose a domain-specific language (DSL),
so-called FACT – Functional Algebra for
Continuous Time, based on the algebraic
properties of the General Purpose Analog
Computer (GPAC), a theoretical model of
computation recently updated as a continuous
time equivalent of the Turing Machine. We lift
the GPAC to a continuous time dynamics inside a
black box semantics for understanding hybrid
systems, which allows us to redefine continuous
time semantics inspired by the functional
reactive programming style. FACT leverages the
type class mechanism from the Haskell functional
programming language to implement operators that
capture the proposed continuous time semantics.
A speed-optimized working open-source
implementation in the Haskell functional
language is provided and was used to demonstrate
how the language supports modeling and
simulation.
pdf
Transforming Discrete Event Models to Machine
Learning Models
Hessam S. Sarjoughian, Forouzan Fallah, and
Seyyedamirhossein Saeidi (Arizona State University) and
Edward J. Yellig (Intel Corporation)
Abstract
Discrete event simulation, formalized as
deductive modeling, has been shown to be
effective for studying dynamical systems.
Development of models, however, is challenging
when numerous interacting components are
involved and should operate under different
conditions. Machine Learning (ML) holds the
promise to help reduce the effort needed to
develop models. Toward this goal, a collection
of ML algorithms, including Automatic Relevance
Determination are used. Parallel Discrete Event
System Specification (PDEVS) models are
developed for Single-stage and Two-stage cascade
factories. Each model is simulated under
different demand profiles. The simulated data
sets are partitioned into subsets, each for one
or more model components. The ML algorithms are
applied to the data sets for generating models.
The throughputs predicted by the ML models
closely match those in the PDEVS simulated data.
This study contributes to modeling by
demonstrating the potential benefits and
complications of utilizing ML for discrete-event
systems.
pdf
Validation without Data - Formalizing Stylized Facts
of Time Series
Pia Wilsdorf, Marian Zuska, Philipp Andelfinger, Florian
Peters, and Adelinde Uhrmacher (University of Rostock)
Abstract
A stylized fact is a simplified presentation of
an empirical finding. When modeling and
simulating complex systems and real data are
sparse, stylized facts have become a key
instrument for building trust in a model as they
represent important requirements regarding the
model’s behavior. However, automatically
validating stylized facts has remained limited
as they are usually expressed in natural
language. Therefore, we develop a formal
language with a custom syntax and tailored
predicates allowing modelers to unambiguously
and succinctly describe important (temporal)
characteristics of simulation traces or
relationships between multiple traces via
statistical tests. The proposed formal language
is able to express numerous facts from the
literature in different application domains, as
well as to automatically check stylized facts.
If stylized facts are defined at the beginning
of a simulation study, formally expressing and
checking them can streamline and guide the
development of simulation models and their
successive revisions.
pdf
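A tiny illustration of machine-checkable stylized facts:
declarative predicates over simulation traces, including a
statistical test across replications. The predicate names
and thresholds below are our own, not the paper's syntax.

    # Checking stylized facts against simulation traces.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    trace = np.cumsum(rng.normal(0.1, 1.0, size=200))   # one trace
    reps_a = rng.normal(1.0, 0.2, size=30)              # per-replication
    reps_b = rng.normal(1.3, 0.2, size=30)              # summaries

    facts = {
        "eventually exceeds 5": trace.max() > 5,
        "upward trend": np.polyfit(np.arange(trace.size), trace, 1)[0] > 0,
        "scenario B > scenario A":
            stats.ttest_ind(reps_b, reps_a).pvalue < 0.05
            and reps_b.mean() > reps_a.mean(),
    }
    for fact, holds in facts.items():
        print(f"{fact}: {'holds' if holds else 'violated'}")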
Track Coordinator - Professional Development: Thomas Berg (The University of Tennessee, Knoxville),
Weiwei Chen (Rutgers University)
Technical Session · Professional Development
Panel: Navigating Publication Outlets for Simulation
Research: Insights from Journal Editors
Chair: Thomas Berg (The University of Tennessee,
Knoxville)
Navigating Publication Outlets for Simulation
Research: Insights from Journal Editors
Tom Berg (The University of Tennessee, Knoxville); Jose
Blanchet (Stanford University); Christine Currie
(University of Southampton); Weiwei Chen (Rutgers
University); Peter Haas (University of Massachusetts
Amherst); Jeff Hong (Fudan University); Bruno Tuffin
(University of Rennes); and Jie Xu (George Mason
University)
Abstract
This panel discussion is designed to provide
young scholars in the field of simulation with
valuable insights into identifying suitable
publication avenues for their research
endeavors. Senior journal editors will serve as
panelists and share their wealth of experience
and perspectives. Journals represented include
ACM TOMACS, IISE Transactions, INFORMS Journal
on Computing, Journal of Simulation, Operations
Research, and Stochastic Systems. Specifically,
the panelists will introduce preferred topics,
focuses, and future trends for each journal.
Panelists will also share their own experiences
and suggestions on the peer review process, such
as how to navigate through revisions and
rejections, and ethical policies. Young scholars
will also learn the importance of serving the
community as a reviewer, and what senior editors
expect from reviewers.
pdf
Project Management and Construction
Track Coordinator - Project Management and Construction: Jing Du (University of Florida), Joseph Louis (Oregon State
University)
Technical Session · Project Management and Construction
Health, Safety, and Sustainability in Construction
Chair: Shuai Li (the University of Tennessee)
Simulation Modeling for Sustainable Construction: A
Case Study to Highlight the Social Aspect
Mai Ghazal, Fatemeh Parvaneh, Ahmed Hammad, and Yasser
Mohamed (University of Alberta)
Abstract
To cut costs and drive innovation in product
development, many projects have turned to remote
worksites for construction component
pre-fabrication. Fabricating pipe spools in
shops eliminates delays due to weather and
allows for better resource planning. This paper
aims to optimize labor resource usage in a pipe
spool manufacturing plant that fabricates three
different types of spools. It utilizes
historical data to implement a discrete-event
simulation model. The proposed simulation model
effectively reduced idle time and evenly
distributed the workload. As a result, the
overall fabrication time for all three spools
was reduced, leading to a 22% decrease in active
shop usage. This allowed subsequent jobs to
commence earlier, giving the team more
flexibility in meeting deadlines and addressing
labor constraints. This research provides
insights into how resource allocation plans can
be created to maximize sustainability results,
both socially (through improving working
conditions and reducing workloads) and
economically.
pdf
The Impact of Alcohol Use on Construction Safety
Outcomes: An Agent-Based Modeling Investigation
Christin Manning and Ehsan Salari (Wichita State
University)
Abstract
Construction is a notoriously hazardous industry
and heavy alcohol use is common. This project
creates an agent-based modeling (ABM) simulation
exploring the impact of alcohol on safety
outcomes. Simulation modeling is useful in
occupational safety research because it
generates immediate results and bypasses ethical
concerns. Workers and foremen interact on a
virtual jobsite with hazards present. Positive
blood alcohol concentration (BAC) decreases
hazard awareness and reaction time, and
additionally decreases competency of foremen.
Scenarios of baseline, increased, and decreased
alcohol consumption are analyzed for changes in
near misses, injuries, and fatalities.
Additional scenarios of improved training and
engineering controls are explored also for
comparison. A decrease in alcohol consumption
led to a significant reduction in injuries by up
to 12%, and an increase had the opposite effect.
Neither scenario significantly impacted
fatalities, owing to their low base rate.
Safety training had a comparable impact but
improving engineering controls outweighed both.
pdf
3D Object Detection and Localization within
Healthcare Facilities
Da Hu (Kennesaw State University) and Mengjun Wang and
Shuai Li (University of Tennessee)
Abstract
This study introduces a deep learning-based
method for indoor 3D object detection and
localization in healthcare facilities. This
method incorporates spatial and channel
attention mechanisms into the YOLOv5
architecture, ensuring a balance between
accuracy and computational efficiency. The
network achieves an AP50 of 67.6%, an mAP of
46.7%, and a real-time detection rate with an
FPS of 67. Moreover, the study proposes a novel
mechanism for estimating the 3D coordinates of
detected objects and projecting them onto 3D
maps, with an average error of 0.24 m and 0.28 m
in the x and y directions, respectively. After
being tested and validated with real-world data
from a university campus, the proposed method
shows promise for improving disinfection
efficiency in healthcare facilities by enabling
real-time object detection and localization for
robot navigation.
pdf
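The geometry behind the 3D localization step described
above can be illustrated by a pinhole-camera
back-projection of a detected bounding-box center (u, v)
with depth Z into map coordinates. The intrinsics and
camera pose below are illustrative assumptions, not the
paper's calibration.

    # Pinhole back-projection of a detection onto a 3D map.
    import numpy as np

    fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0  # assumed intrinsics (pixels)
    u, v, Z = 410.0, 260.0, 3.2                  # detection center, depth (m)

    # Camera-frame coordinates from the pinhole model.
    pt_cam = np.array([(u - cx) * Z / fx, (v - cy) * Z / fy, Z, 1.0])

    # Camera pose in the facility map (rotation + translation, assumed known).
    T = np.eye(4)
    T[:3, 3] = [12.0, 7.5, 1.4]                  # camera position on the map
    x, y, z, _ = T @ pt_cam
    print(f"object on map: x={x:.2f} m, y={y:.2f} m, z={z:.2f} m")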
Technical Session · Project Management and Construction
Technological Innovations for Enhanced Construction
Operations
Chair: Shuai Li (the University of Tennessee)
Applying Civil Information Modeling and Augmented
Reality to the Construction of Underground
Pipelines
Andy Cui (Montgomery Blair High School) and Man Liang
(University of Maryland)
Abstract
Municipal construction projects are often
challenging and risk-prone due to unexpected
underground conditions. Access to As-Built and
As-Design data is essential to avoid budget
overruns, schedule delays, and other
construction disputes. However, coordinating
field conditions with construction drawings can
be difficult and lead to discrepancies.
Traditional methods of denoting information onto
the ground by surveyors and field workers have
been limited in their ability to provide
relevant information and support scaling up.
These methods also create restrictions in data
sharing and communication among workers and
engineering teams. With the development and use
of AR technology, our study proposes an
augmented reality tool leveraging Google ARCore
to assist construction engineers in a
straightforward and efficient manner by
displaying utility information, including pipe
direction, type, slope, diameter, and material.
The campus area of the University of Maryland
College Park is used as a case study to
demonstrate our approach.
pdf
A Value Stream Mapping-Based Discrete Event
Simulation Template for Lean Off-Site Construction
Activities
Prashanth Kumar Sreram (Indian Institute of Technology
Bombay, NICMAR Hyderabad) and Albert Thomas (Indian
Institute of Technology Bombay)
Abstract
Lean construction is a promising approach for
performance improvement in the construction
industry. Value stream mapping (VSM) is an
essential lean tool for the process improvement
of construction activities. However, VSM,
regarded as a static pen-and-paper technique,
requires repeating the VSM preparation for every
improvement alternative. Therefore, dynamism can
be introduced into VSM by developing computer
simulation models, which is the study's
objective. A VSM-based discrete event simulation
(DES) template is presented in this paper for
off-site construction activities. The model
provides a virtual testing environment for the
user to decide upon the potential time reduction
in non-value-added (NVA) activities for the
process improvement. The model is developed and
validated using actual data from a precast
production factory.
The DES-VSM simulation model assists plant
managers with the best possible NVA reduction
strategy and accelerates lean implementation in
the construction industry.
pdf
Technical Session · Project Management and Construction
Advanced Simulation Methods in Construction
Chair: Albert Thomas (Indian Institute of Technology
Bombay)
New Functions and Statements to Support Preemption in
the STROBOSCOPE Simulation System
Photios G. Ioannou (University of Michigan) and Veerasak
Likhitruangsilp (Chulalongkorn University)
Abstract
The new preemption capabilities added to the
STROBOSCOPE simulation system are described and
illustrated by two examples. The first example
involves moving soil using two wheelbarrows and
two laborers. It investigates the conditions for
preemption to improve production by allowing the
return of an empty wheelbarrow to interrupt
loading and to start hauling a partially loaded
wheelbarrow immediately. In the second example,
two cranes unload barges bringing fill material
for undersea land reclamation. When only one
barge is available, it can unload using both
cranes. When two or more barges become
available, each barge unloads using one crane.
Unloading a barge can switch between using one
and two cranes multiple times, with the
remaining unload time either cut in half or
doubled each time. Modeling the multiple
reallocations of cranes and the required time
adjustments illustrates the new STROBOSCOPE
preemption capabilities.
pdf
Simulation of Earthmoving for a Dam Using Engineering
Calculations
Photios G. Ioannou (University of Michigan)
Abstract
Detailed STROBOSCOPE simulations of earthmoving
for the construction of a dam use the
engineering calculations typically employed in
heavy construction to estimate equipment
performance based on the characteristics of the
haul and return roads and the mechanical
properties of actual models of heavy loaders and
trucks. Sensitivity analysis investigates the
total cost of truck combinations while
considering the traffic effects of one or two
bridges needed to cross a river along the haul
route. This example can serve as a simulation
model template to facilitate the wider
acceptance of simulation in heavy construction
practice.
pdf
Technical Session · Project Management and Construction
Strategic Modeling and Decision Making in Construction
Chair: Gabriel Castelblanco (University of Florida)
Enhancing the Public Investment in Public-Private
Partnerships Using System Dynamics Modeling
Sara Biziorek and Alberto De Marco (Politecnico di
Torino), Jose Guevara (Universidad de los Andes), and
Gabriel Castelblanco (University of Florida)
Abstract
Public-Private Partnership (PPP) programs have
been adopted to leverage private funding for the
development of public infrastructure and
services, thereby relieving public fiscal
pressure. However, the complexity and length of
PPP contracts can lead to higher costs for the
public sector. Using data from more than 700
PPPs that integrate the UK Private Finance
Initiative and Private Finance 2 programs, this
study analyzes the long-term financial
implications of these programs using System
Dynamics. Causal-loop diagrams were developed to
illustrate the causal structures that generate
the long-term financial effects of PPPs on the
public sector. The paper offers potential
strategies to enhance the performance of PPP
programs. This study contributes to closing the
gap identified in previous research for
more efficient PPP programs by uncovering their
dynamics and offering suitable policies for
governments to improve their outcomes.
pdf
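For readers unfamiliar with System Dynamics, the stock-and-flow logic behind a causal-loop diagram can be prototyped in a few lines; the toy Python fragment below Euler-integrates a single hypothetical reinforcing loop, and every name and rate is an illustrative assumption rather than a result from the paper:

    # Toy stock-and-flow sketch of a reinforcing fiscal-pressure loop.
    liability = 100.0       # stock: outstanding PPP payment obligations
    fiscal_pressure = 0.0
    dt = 0.25
    for _ in range(int(40 / dt)):                     # 40 simulated years
        new_ppps = 5.0 * (1 + 0.1 * fiscal_pressure)  # pressure begets PPPs
        payments = 0.06 * liability                   # annual unitary payments
        liability += (new_ppps - payments) * dt       # Euler integration
        fiscal_pressure = payments / 20.0             # normalized pressure
    print(round(liability, 1))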
A Discrete-Event Simulation to Explore Disaggregation
of Biotechnology Research and Development
Workflows
Susan S.M. Hanson, Noah Mecikalski, Alex Tobias, Jack
Morris, Neal Wagner, and Rebecca S. Widrick (MITRE
Corporation) and Damon Bayer (University of California
Irvine)
Abstract
Research and development (R&D) of biotechnology
products is an iterative process typically
characterized by a monolithic workflow in which
a single organization takes a project from start
to finish through many complex operations. This
paper presents a discrete-event simulation
methodology to explore an alternative
disaggregated workflow in which R&D is managed
by a single organization but individual
operations are distributed among multiple
organizations. This methodology is applied to a
protein engineering R&D process to compare the
monolithic and disaggregated workflows over a
range of conditions and scenarios. Based upon a
set of assumed parameters, results identify
conditions favorable to either workflow and
provide a first indication that the
industry’s trend towards disaggregation
may lead to improvements in development
timelines. The methodology also provides a
foundation for decision support tools that
enable decision-makers to manage biotechnology
R&D projects.
pdf
Development of a Discrete Event Simulation Based
Framework to Evaluate Six Sigma Implementation in the
Construction Sector
Srinivas Rao Jalam (Indian Institute of Technology
Bombay, Mumbai); Vaishnavi Thumuganti (Stanford
University); and Albert Thomas (Indian Institute of
Technology Bombay, Mumbai)
Abstract
Six Sigma is a useful technique adopted in the
construction industry to attain superior quality
levels by reducing the variability in the
processes. However, rigorous field
implementation of a Six Sigma methodology takes
time, money, resources, and stakeholder
commitment. This study develops a
simulation-based framework that can mimic a Six
Sigma implementation effort in a construction
site using a discrete event simulation
technique. Such a framework helps the decision
makers to check the benefits of Six Sigma by
assessing what-if scenarios for possible system
improvement, even before expending the time and
resources needed for field implementation of Six
Sigma techniques. Therefore, through a
combination of discrete event simulation and Six
Sigma, the variations in a process at a
construction project can be reduced. The results
of this study can inspire construction managers
to use simulation to understand Six Sigma
implementation and improve the process or system
to fulfill customer needs.
pdf
Reliability Modeling and Simulation
Track Coordinator - Reliability Modeling and Simulation: Sanja Lazarova-Molnar (University of Southern Denmark,
Karlsruhe Institute of Technology), Xueping Li (University
of Tennessee), Olufemi Omitaomu (Oak Ridge National
Laboratory)
Technical Session · Reliability Modeling and Simulation
Simulation of Stochastic Models
Chair: Sophia Gunluk (Mila)
Identifying Quality Mersenne Twister Streams for
Parallel Stochastic Simulations
Benjamin Antunes, Claude Mazel, and David Hill (LIMOS)
Abstract
The Mersenne Twister (MT) is a pseudo-random
number generator (PRNG) widely used in High
Performance Computing for parallel stochastic
simulations. We aim to assess the quality of
common parallelization techniques used to
generate large streams of MT pseudo-random
numbers. We compare three techniques: sequence
splitting, random spacing and MT indexed
sequence. The TestU01 Big Crush battery is used
to evaluate the quality of 4096 streams for each
technique on three different hardware
configurations. Surprisingly, all techniques
exhibited defects in almost 30% of the streams,
with no technique showing better quality than
the others. While all 106 Big Crush tests showed
failures, the failure rate was limited to a
small number of tests (a maximum of 6 tests
failed per stream, resulting in an over 94%
success rate). Thanks to 33 CPU years of
computation, the high-quality streams identified
are provided. They can be used for
sensitive parallel simulations such as nuclear
medicine and precise high-energy physics
applications.
pdf
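As a concrete illustration, two of the three stream-parallelization techniques compared here can be reproduced with NumPy's MT19937 bit generator (a minimal sketch; NumPy >= 1.17 is assumed and the seeds are arbitrary):

    import numpy as np

    n_streams = 4

    # Sequence splitting: partition one MT19937 period via jump-ahead;
    # each jumped() call advances the state by 2**128 draws.
    bit_gen = np.random.MT19937(12345)
    split = []
    for _ in range(n_streams):
        split.append(np.random.Generator(bit_gen))
        bit_gen = bit_gen.jumped()

    # Random spacing: independently seeded generators placed at
    # pseudo-random points of the period.
    seeds = np.random.SeedSequence(12345).spawn(n_streams)
    spaced = [np.random.Generator(np.random.MT19937(s)) for s in seeds]

    print([g.random() for g in split])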
Simulating Justice: Simulation of Stochastic Models
for Community Bail Funds
Sophia Gunluk (Mila) and Yidan Zhang and Jamol Pender
(Cornell University)
Abstract
Bail funds have a long history of helping those
who cannot afford bail so that they can await
trial at home. They have also had a large impact
on defendants’ verdicts. In this paper,
we present the first stochastic model for
capturing the dynamics of a community bail fund.
Our bail fund model integrates traditional
queueing models with classic insurance/risk
models to represent the bail fund’s
intricate dynamics. We employ simulation
techniques to assess Gaussian-based
approximations that estimate the probability of
a defendant being denied access to the bail fund
when it lacks the adequate funds to support
them. Additionally, we propose a new
simulation-based algorithm that leverages a
deterministic infusion of capital as a control
variable to stabilize the probability that
defendants have access to the bail fund. Our
simulation results reveal that our
Gaussian-based approximations are suitable for
moderately and highly active bail funds.
pdf
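To make the combined queueing/risk flavor of such a model concrete, the following minimal Python sketch treats a hypothetical bail fund as a capital process with Poisson arrivals and delayed repayments; every parameter value is an illustrative assumption, not the paper's calibration:

    import heapq, random

    def simulate(capital=100_000, arrival_rate=1.0, mean_bail=5_000,
                 mean_case_len=30.0, horizon=365.0, seed=0):
        rng = random.Random(seed)
        t, denied, served = 0.0, 0, 0
        repayments = []  # heap of (time, amount) for bail returning to the fund
        while t < horizon:
            t += rng.expovariate(arrival_rate)           # next defendant arrives
            while repayments and repayments[0][0] <= t:  # collect returned bail
                capital += heapq.heappop(repayments)[1]
            bail = rng.expovariate(1.0 / mean_bail)
            if bail <= capital:
                capital -= bail
                heapq.heappush(repayments,
                               (t + rng.expovariate(1.0 / mean_case_len), bail))
                served += 1
            else:
                denied += 1  # fund lacks adequate money to post this bail
        return denied / (denied + served)

    print(simulate())  # estimated probability of being denied access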
Sensor Fusion DEVS for Angle Estimation on Inertial
Measurement Unit
Gabriel Wainer, Joseph Boi-Ukeme, and Vedant Paranjape
(Carleton University)
Abstract
We explore the application of a Sensor Fusion
Framework, called SAFE (Simple, Applicable,
Extensible, and Flexible), to improve the
reliability of measurements obtained from
Inertial Measurement Unit (IMU) sensors. SAFE is
built using a DEVS specification and the Cadmium
tool. Measuring angular position is a difficult
task due to the unreliability of gyroscopes and
accelerometers, two sensors widely used to
measure angles. Although angular position can be
measured using imaging systems, these are
costly, and not ideal for handheld and portable
devices. An alternative solution is to use
sensor fusion to fuse the readings of both
accelerometer and gyroscope, obtaining reliable
readings. We present the application of the SAFE
methodology and the results of a case study
that shows the potential of this method.
pdf
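One classic fusion computation that such a framework can wrap into DEVS components is the complementary filter, sketched below in Python; the smoothing factor, sample period, and sensor readings are illustrative, and this is not the SAFE implementation itself:

    import math

    def fuse(angle, gyro_rate, ax, az, dt=0.01, alpha=0.98):
        """Blend the integrated gyro rate (smooth but drifting) with the
        accelerometer's gravity angle (noisy but drift-free)."""
        accel_angle = math.degrees(math.atan2(ax, az))
        return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

    angle = 0.0
    for _ in range(100):  # hypothetical constant readings
        angle = fuse(angle, gyro_rate=1.5, ax=0.05, az=0.99)
    print(round(angle, 2))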
Technical Session · Reliability Modeling and Simulation
Cyber-physical Systems
Chair: Olufemi Omitaomu (Oak Ridge National Laboratory)
A Virtual Testbed for the Development and
Verification of Cyber-Physical Systems
Jan Reitz, David Böken, and Jürgen
Roßmann (RWTH Aachen University)
Abstract
This paper presents a virtual testbed for the
development and verification of cyber-physical
systems, integrating network simulation,
physics, and hardware emulation within the
multi-domain simulation framework VEROSIM. The
testbed facilitates comprehensive
software-in-the-loop testing, enabling accurate
and reliable evaluation of control algorithms in
complex, interconnected systems. The integrated
approach simplifies simulation setup and model
management, while allowing natural treatment of
mobility and the use of sophisticated physical
radio wave propagation models. The testbed also
enables the simulation of various fault
scenarios, supporting the assessment of system
resilience and fault-tolerant strategies. A case
study involving a capsule approaching the
International Space Station demonstrates the
effectiveness of the proposed testbed in
capturing the interactions between software,
hardware, and physical elements, and verifying
the overall behavior of a cyber-physical system
under adverse conditions.
pdf
Multi-Agent Simulation Based Framework for Power
Restoration Time Estimation at Distribution
Level
Yang Chen (North Carolina Agricultural and Technical
State University), Olufemi Omitaomu (Oak Ridge National
Laboratory), Nicholas Roberts (Dewberry), and Bandana
Kar (U.S. Department of Energy)
Abstract
The growing frequency of power outages has
prompted increased interest in developing a more
resilient power grid that can quickly recover
from weather-related damage. At the distribution
level, power restoration is a complex,
multi-stage process involving multiple response
entities. Providing utility stakeholders,
government regulators, and the public with
information about outage duration and estimated
time to restoration is crucial. The research
employs a multi-agent simulation approach, which
allows for the simulation of decision-making
behaviors among different entities and the
incorporation of various uncertainties.
Specifically, the study uses the open-source
simulation package Mesa-Geo in conjunction with
the Python language and constructs a road
network using the open-source network extension
pgRouting for routing queries. The research
design includes several experiments focused on
Florida as a case study, comparing repair crew
sizes, power outage numbers, and road damage
scenarios. The findings could offer valuable
managerial guidance on resource allocation in
the restoration process.
pdf
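For orientation, a repair-crew agent can be skeletonized in Mesa, the library underlying Mesa-Geo; the Mesa 2.x-style API is assumed, and the travel and repair times below are placeholders for the paper's pgRouting-based road-network queries:

    import mesa

    class RepairCrew(mesa.Agent):
        def __init__(self, unique_id, model):
            super().__init__(unique_id, model)
            self.busy_until = 0

        def step(self):
            # A free crew claims the next outage and becomes busy for the
            # (placeholder) travel plus repair duration.
            if self.model.schedule.time >= self.busy_until and self.model.outages:
                self.model.outages.pop()
                travel = self.model.random.randint(1, 4)
                repair = self.model.random.randint(2, 8)
                self.busy_until = self.model.schedule.time + travel + repair

    class RestorationModel(mesa.Model):
        def __init__(self, n_crews=5, n_outages=100):
            super().__init__()
            self.outages = list(range(n_outages))
            self.schedule = mesa.time.RandomActivation(self)
            for i in range(n_crews):
                self.schedule.add(RepairCrew(i, self))

    model = RestorationModel()
    while model.outages:
        model.schedule.step()
    print("all outages assigned by tick", model.schedule.time)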
A Framework for Validating Data-Driven Discrete-Event
Simulation Models of Cyber-Physical Production
Systems
Jonas Friederich (University of Southern Denmark) and
Sanja Lazarova-Molnar (Karlsruhe Institute of
Technology)
Abstract
In recent years, there has been a significant
increase in the deployment of Cyber-physical
Production Systems (CPPS) across various
industries. CPPS consist of interconnected
devices and systems that combine physical and
digital elements to enhance the efficiency,
productivity, and reliability of manufacturing
processes. Due to the continuous and fast-paced
evolution of the behavior of CPPS, there is an
increasing interest in generating data-driven
Discrete-event Simulation (DES) models of such
systems. The validation of these models,
however, remains a challenge, and traditional
approaches may be insufficient to ensure their
accuracy. To address this challenge, we propose
a framework for validating data-driven DES
models of CPPS. We emphasize the importance of
continuously monitoring the validity of
data-driven DES models and updating them when
necessary to ensure their accuracy over time.
We, furthermore, demonstrate our proposed
approach through a case study in reliability
assessment and discuss challenges and
limitations of our framework.
pdf
Scientific Applications
Track Coordinator - Scientific Applications: Rafael Mayo-García (CIEMAT), Esteban Mocskos
(University of Buenos Aires (AR), CSC-CONICET)
Technical Session · Scientific Applications
Computer Science for Simulations
Chair: Rafael Mayo-García (CIEMAT)
Strong Scaling of the SVD Algorithm for HPC Science:
A PETSc-based Approach
Paula Ferrero-Roza (Universidad de La Coruña),
José A. Moríñigo (CIEMAT), and Filippo
Terragni (UC3M)
Abstract
The Singular Value Decomposition (SVD) algorithm
is ubiquitous in many fields of science and
technology. It may be used embedded into other
advanced algorithms, solvers or data processing
chains. In those scenarios dealing with large
data volumes expressed as a huge matrix, there
is the need of a parallel SVD version to process
it efficiently. We present some ideas and
results obtained within the PETSc framework,
which enable the design of promising scalable
HPC solvers. The SVD implementations in focus
have been taken from the SLEPc library, which
plugs seamlessly into PETSc to extend its
capabilities. In addition, there is also a
randomized SVD and wrappers interfacing
ScaLAPACK and other packages to extract
singular triplets. This work assesses the strong
scaling attained with these SVD implementations
at extracting the leading singular values of a
population of both sparse and dense matrices. A
comparison of performance is provided.
pdf
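A minimal sketch of this setup through the Python bindings (petsc4py/slepc4py assumed; note that setOperators was named setOperator in older slepc4py releases) extracts the leading singular values of a toy distributed sparse matrix:

    # Run under MPI, e.g.: mpiexec -n 4 python svd_demo.py
    from petsc4py import PETSc
    from slepc4py import SLEPc

    n = 1000
    A = PETSc.Mat().createAIJ([n, n], nnz=3)  # distributed sparse matrix
    rstart, rend = A.getOwnershipRange()
    for i in range(rstart, rend):             # toy bidiagonal entries
        A.setValue(i, i, 2.0)
        if i + 1 < n:
            A.setValue(i, i + 1, -1.0)
    A.assemble()

    svd = SLEPc.SVD().create()
    svd.setOperators(A)          # setOperator(A) in older slepc4py versions
    svd.setDimensions(nsv=5)     # request the 5 leading singular triplets
    svd.solve()
    for k in range(svd.getConverged()):
        print("sigma_%d = %g" % (k, svd.getValue(k)))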
nbSimGen: Jupyter Notebook Extension for Generating
Simulation Experiments
Pia Wilsdorf, Anton Willy Kirchhübel, and Adelinde
M. Uhrmacher (University of Rostock)
Abstract
Simulation experiments are crucial in conducting
simulation studies. With simulation studies
growing increasingly complex, simulation
experiments are intertwined with steps of
conceptual modeling, model building, analyzing
data, and visualizing and interpreting results.
Making the products of these various steps
(assumptions, requirements, data, model
components, and experiments) explicit has been
shown to increase the reproducibility of
simulation studies. Moreover, using an
integrated environment that allows developing,
organizing and documenting those products can
facilitate their automatic reuse and
exploitation. We explore Jupyter Notebook as an
all-in-one solution for conducting and
documenting a simulation study, and we present
nbSimGen. This Jupyter Notebook extension lends
support to modelers by automatically specifying
and running suitable simulation experiments. It
is based on an annotation vocabulary that,
during the development of the conceptual model
and the simulation model, allows users to mark
portions of their notebook deemed relevant to
the various simulation experiments to come.
pdf
A Facilitated Discrete Event Simulation Framework to
Support Online Studies: An Intervention in a Small
Enterprise
Milena Silva Oliveira, Carlos Henrique Santos, Gustavo
Teodoro Gabriel, Fabiano Leal, and José Arnaldo
Barra Montevechi (Federal University of Itajuba)
Abstract
Several challenges prevent the wider adoption of
discrete event simulation studies, such as the
financial constraints of collecting large data
samples and of hiring qualified people for data
analysis and complex model development. This
paper therefore proposes a facilitated-modeling
framework to support simulation studies in
settings where simulation is not widely
used. Since the facilitated DES
frameworks in the literature focus on healthcare
and face-to-face meetings, the present work
offers a framework for simulation projects in
production systems, which also supports online
interventions. After its development, the
FaMoSim (Facilitated Modeling Simulation)
framework was applied in a real case to evaluate
its applicability. In the application, it was
possible to carry out a faster and more flexible
online modeling process, create a simple
computer model that does not require a complex
data collection structure nor a specialist team,
and assist the stakeholders in identifying
improvements.
pdf
Technical Session · Scientific Applications
AI-oriented Simulations
Chair: Rafael Mayo-García (CIEMAT)
Emotion Classification Through Speech Data
Analysis
Luzalen Marcos, Abdolreza Abhari, and Kristiina Mai
(Toronto Metropolitan University)
Abstract
Good quality healthcare services require
effective communication between the patient and
the healthcare provider. This work will help
improve the areas of healthcare systems
automation and optimization by applying Speech
Emotion Recognition (SER) in health
consultations to prevent miscommunication
between patients and healthcare providers.
Crowd-Sourced Emotional Multimodal Actors
Dataset (CREMA-D) was used to compare the
performances of different machine learning
models in classifying emotions. Before feeding
the raw dataset to the models, exploratory data
analysis was done to determine features that
should be considered for future analysis. Our
results showed that depending on the emotion,
there are some syllables in the text that were
emphasized or took time to be pronounced by the
speaker. After data analysis, the dataset was
fed into different models, and it was determined
that the Support Vector Machine (SVM) is a
suitable machine-learning model for SER.
pdf
GPT-Based Models Meet Simulation: How to Efficiently
Use Large-Scale Pre-Trained Language Models Across
Simulation Tasks
Philippe J. Giabbanelli (Miami University)
Abstract
The disruptive technology provided by
large-scale pre-trained language models (LLMs)
such as ChatGPT or GPT-4 has received
significant attention in several application
domains, often with an emphasis on high-level
opportunities and concerns. This paper is the
first examination regarding the use of LLMs for
scientific simulations. We focus on four
modeling and simulation tasks, each time
assessing the expected benefits and limitations
of LLMs while providing practical guidance for
modelers regarding the steps involved. The first
task is devoted to explaining the structure of a
conceptual model to promote the engagement of
participants in the modeling process. The second
task focuses on summarizing simulation outputs,
so that model users can identify a preferred
scenario. The third task seeks to broaden
accessibility to simulation platforms by
conveying the insights of simulation
visualizations via text. Finally, the last task
evokes the possibility of explaining simulation
errors and providing guidance to resolve them.
pdf
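As an example of the second task (summarizing simulation outputs), a call through the openai Python client might look as follows; client version >= 1.0 and an OPENAI_API_KEY environment variable are assumed, and the results dictionary is invented for illustration:

    from openai import OpenAI

    results = {"scenario A": {"mean wait": 4.2, "utilization": 0.81},
               "scenario B": {"mean wait": 6.9, "utilization": 0.93}}

    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": "Summarize these simulation results for a "
                              f"non-technical stakeholder: {results}"}],
    )
    print(reply.choices[0].message.content)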
Technical Session · Scientific Applications
Multi-physics Simulations
Chair: Rafael Mayo-García (CIEMAT)
An Integrated Multi-Physics Optimization Framework
for Particle Accelerator Design
Gongxiaohui Chen, Tyler Chang, and John Power (Argonne
National Laboratory) and Chunguang Jing (Euclid Techlabs
LLC)
Abstract
The overarching goal of beamline design is to
achieve a high brightness electron beam from the
beamline. Traditional beamline design studies
involved separate optimizations of
radio-frequency cavities, magnets, and beam
dynamics using different codes and pursuing
various intermediate objectives. In this work,
we present a novel unified global optimization
framework that integrates multiple physics
modules for beamline design as simulation
functions for a two-stage global optimization
solver.
pdf
The Cloud-Based Implementation and Standardisation of
Anthropomorphic Phantoms and their Applications
Osiris Núñez-Chongo and Manuel Carretero
(Universidad Carlos III de Madrid); Rafael
Mayo-García (Centro de Investigaciones
Energéticas, Medioambientales y Tecnológicas
(CIEMAT)); and Hernán Asorey (Comisión
Nacional de Energía Atómica, Centro
Atómico Bariloche)
Abstract
Radiation protection applications often require
the creation of a large number of precise
simulations of radiation-human body
interactions. Our research is focused on
creating RadPhantom, a new Geant4 application
that constructs voxelized anthropomorphic
phantom models. This allows for the standardized
and reproducible generation of Geant4
simulations in cloud-based environments. We have
incorporated existing and publicly accessible
models into Meiga, a framework designed for the
integration of Geant4-based applications. To
standardize these simulations, guarantee their
reproducibility, and adhere to the FAIR
principles, we have developed an extended
vocabulary schema using metadata and ontologies
that align with current standards. By employing
virtualization containers, we capitalize on the
scalability and adaptability of public and
federated clouds. In this paper, we detail our
implementation, present some benchmarking
results and comparisons with current
methodologies, and discuss the potential
applications for evaluating doses on commercial
flights or assessing radiation shielding in
neutron production facilities.
pdf
Simulation Around the World
Track Coordinator - Simulation Around the World: Seong-Hee Kim (Georgia Institute of Technology), Theresa
Roeder (San Francisco State University), John Shortle
(George Mason University)
Technical Session · Simulation Around the World
Construction and Project Management
Chair: Gabriel Wainer (Carleton University)
DEVS Modeling and Simulation of the Loading and
Hauling Process in Open Pit Mines
Joel Santana and Alonso Inostrosa-Psijas (Universidad de
Valparaíso), Francisco Moreno (Universidad de
Santiago), Mauricio Oyarzún (Universidad Arturo
Prat), and Gabriel Wainer (Carleton University)
Abstract
Chile is the world's leading copper producer,
with more than 5.6 million tons produced in
2020. Most of the produced ore comes from open
pit mines, whose extraction process consists of
different subprocesses, with ore hauling
incurring the highest operational cost. Tools to
improve this subprocess are of paramount
importance. Most tools use approaches that rely
on optimization based on analytical methods.
However, these fail to capture human behavior or
to consider fine-grained details. To this end,
we present a DEVS (Discrete-Event System
Specification) simulation model. The formal
definition of DEVS helps with the design and
experimentation. DEVS modular interfaces allow
users to extend the model easily to consider
more entities, mine layouts, and dispatching
policies. Simulations of the model delivered
precise results compared to the literature,
providing a valuable tool for decision-making in
the mining industry.
pdf
A Hybrid Simulation-based Optimization Framework for
Managing Modular Bridge Construction Projects: A
Cable-Stayed Bridge Case Study
Mohamed Assaf, Sena Assaf, William Correa, Rafik
Lemouchi, and Yasser Mohamed (University of Alberta)
Abstract
Bridge construction is generally among the most
complex undertakings in the construction
industry due to its large scale and supply chain
complexity. The modular bridge
construction (MBC) technique is considered
advantageous in providing higher productivity,
shorter schedules, and better quality. Current
practices in managing MBC projects overlook
dynamic behaviors among the relevant
stakeholders and the interactions among various
systems, including manufacturing,
logistics, and onsite assembly. To this end,
this paper proposes a simulation-optimization
framework to enhance MBC project planning. The
simulation module comprises discrete event
simulation and agent-based modeling to model the
interconnected behaviors of the MBC systems. The
optimization module aims to improve the key
performance indicators (KPIs) of MBC projects,
including project cost, schedule, and
sustainability. The proposed framework is
validated by introducing an MBC case of a
cable-stayed bridge. The solutions generated by
the optimization model show possible significant
enhancements in the identified KPIs.
pdf
Integrated Analysis and Simulation for Enhancing Wall
Assembly Process Efficiency by Resolving
Bottlenecks
Zeyu Mao, Alejandro Ramon Rivera, and Yasser Mohamed
(University of Alberta)
Abstract
Unbalanced production rates across activities and
excessive resource allocation are the leading
reasons behind process bottlenecks and are among
the causes that negatively affect projects,
leading to wasted resources. Many
industries suffer from unbalanced resource
workloads, where manufacturing takt times at
some workstations are out of sync with preceding
stations, consequently leading to interruptions
in the workflow between activities. This
research aims to assess the current state of the
manufacturing process of a wall assembly line,
from material cutting to installation, to
identify bottlenecks, and to create a framework
that contrasts both cycles and finally proposes
a solution through simulation. A
case was studied to propose innovative methods
to improve the process flow and to eliminate any
waste generated by bottlenecks. This will not
only reduce the process duration but will also
significantly reduce cost expenditure, since
the amount of idle time and resources will be
reduced.
pdf
Technical Session · Simulation Around the World
Facilitating Business Decisions
Chair: Christos Alexopoulos (Georgia Institute of
Technology)
Impactful Simulation Models from a Brazilian
Simulation Consultancy
Wilson Pereira and Leonardo Chwif (Simulate)
Abstract
Simulate Simulation Technology is a Brazilian
consultancy company focused on developing
discrete event simulation models and providing
simulation training. Some of the simulation
models developed over the last 20 years are
classified by us as successful and impactful,
regardless of their complexity,
applicability level, or purpose. This article
presents some of these models.
pdf
Using System Dynamics to Adapt Business Models to
Changing Conditions
Marisa Analia Sanchez (Universidad Nacional del Sur) and
Javier García Fronti (Universidad de Buenos Aires)
Abstract
This paper addresses the problem of determining
organizational adaptations to ensure business
continuity. We propose a methodology to assess
the impact of disruptions on a business model
and evaluate interventions using System Dynamics
archetypes. The methodology aims to contribute
to making decision-making more effective and
efficient in an uncertain scenario.
pdf
Simulation-Based Immersive Analytics Toward Advanced
Decision Making
Gisela Belen Confalonieri, Ezequiel Pecker-Marcosig,
Esteban Lanzarotti, and Rodrigo Daniel Castro
(Departamento de Computación, FCEyN-UBA / Instituto
de Ciencias de la Computación (ICC-CONICET))
Abstract
Managing effective visualisations for data
analysis is critical to support informed
decision making across multiple domains, which
also requires the ability to interact with the
data. This includes understanding data from
real-world scenarios enriched with simulated
virtual data, and the ability to assess the
impact of user interventions on concurrently
running simulation models. To address this, we
propose a framework that combines a DEVS
simulator with a game engine, allowing users to
interact directly with the model during
simulation runtime, while observing realistic
visualisations of the generated data and system
behaviour.
pdf
Technical Session · Simulation Around the World
Discrete-event Simulation Language and Platforms
Chair: María Julia Blas (INGAR CONICET UTN)
RustSim: A Process-Oriented Simulation Framework for
the Rust Language
Kevin Frez and Mauricio Oyarzun (Universidad Arturo
Prat), Alonso Inostrosa-Psijas (Universidad de
Valparaíso), Francisco Moreno (Universidad de
Santiago), and Gabriel Wainer (Carleton University)
Abstract
We present RustSim, a library for discrete-event
process-oriented simulations designed and
implemented in the Rust programming language. It
includes a broad set of classes to allow the
user to implement simulation processes and
process-oriented primitives. The flexible
modular design of RustSim allows users to extend
its functionality. In addition, RustSim includes
mechanisms to avoid inconsistencies when
applying state-changing primitives that other
libraries in the language's ecosystem do not
provide. We take advantage of Rust generators
(coroutine equivalent) to implement
process-oriented simulation primitives. Finally,
the library's internal process handling
structure is discussed in detail, including its
implementation, how simulations are executed,
and a case study with a highly detailed example
of its use.
pdf
Modeling and Simulating Stream Processing
Platforms
Alonso Inostrosa-Psijas (Universidad de
Valparaíso); Veronica Gil-Costa (UNSL, CONICET);
Roberto Solar and Mauricio Marin (Universidad de
Santiago de Chile); and Gabriel Wainer (Carleton
University)
Abstract
Stream processing platforms allow processing and
analyzing real-time data. Several tools have
been developed for these platforms to guarantee
that the applications running on them are
scalable, fast, and fault-tolerant and that they
can be deployed on many processors. However,
determining the proper number of processors
suitable to hold a given stream processing-based
software application is challenging, especially
if the application is intended to serve a large
user community. In this paper, we propose to
model and simulate stream processing platforms
for performance evaluation purposes. In our case
study, we simulated a commonly used application
for the analysis of Twitter streams with Storm.
We evaluate its performance under different
workloads. Our simulator supports profiling to
measure various aspects of the application's
performance. Results show that the simulator can
replicate the metrics reported by the
application running on a real platform with
minimal error.
pdf
Using a Software Design Pattern for Redesign Routed
DEVS Formalism
Mateo Toniolo, María Julia Blas, and Silvio Gonnet
(Universidad Tecnológica Nacional - Facultad
Regional Santa Fe)
Abstract
Routed DEVS (RDEVS) models improve traditional
discrete-event models by enhancing the
development of routing processes over predefined
behaviors. In this paper, we demonstrate how a
Software Engineering design pattern,
specifically the Decorator pattern, was applied
to the RDEVS formalism design to include event
tracking into the models without altering their
expected behavior. As a result, we provide a
solution that allows getting structured data
from RDEVS models at execution time.
pdf
Technical Session · Simulation Around the World
Agent-based and Healthcare Applications
Chair: Alonso Inostrosa Psijas (Universidad de
Valparaíso)
Using a Hybrid ABMS to Study the Propagation of
Vector-Borne Diseases in an Urban Area with
Heterogenous Geospatial Conditions
Paula Escudero, Mariajose Franco, María Sofía
Uribe, Susana Álvarez, and Rafael Mateus
(Universidad EAFIT)
Abstract
Agent-Based Modeling and Simulation (ABMS) is a
valuable tool for understanding infectious
disease propagation. This study presents a
hybrid ABMS approach to explore the transmission
dynamics of vector-borne diseases (Dengue, Zika,
and Chikungunya) in Bello, Colombia,
incorporating geospatial characteristics. The
model was developed with specific assumptions to
validate its alignment with theoretical
behavior. Our results demonstrate the
temperature’s significant impact on
disease spread. Particularly, Chikungunya
exhibits distinct behavior compared to Dengue
and Zika. While major infection peaks occur
early in the simulation, subsequent spread
diminishes due to the absence of reinfection
considerations. This research represents an
early stage of a larger project, laying the
groundwork for future research to address
computational challenges, enable statistical
analysis with multiple runs, and enhance the
model’s realism with seasonal temperature
variations and geographical distributions. These
findings will provide valuable insights for
policymakers and disease control strategies in
Colombia.
pdf
Agent-Based Model for Analysis of Cervical Cancer
Detection
Juan F. Galindo Jaramillo (University of Campinas,
Hermínio Ometto Foundation) and Leonardo Grando,
José Roberto Emiliano Leite, Diama Bhadra Vale, and
Edson Ursini (University of Campinas)
Abstract
Using Agent-Based Models (ABM) for disease
incidence may help decision-making processes.
This work shows an ABM for cervical cancer
detection. Our results show the relevance of
social indicators.
pdf
Coordination of Hospital Parking and Transportation
Services: A Simulation-based Approach
Tomer Schmid, Dror Neustatel, and Noa Zychlinski
(Technion–Israel Institute of Technology)
Abstract
Motivated by hospital parking problems that
limit the access of patients and visitors, we
study a hospital parking setting comprising an
on-site parking lot with an occupancy-based
dynamic tariff and a free shuttle service from
an off-site free parking lot. We developed a
discrete event simulation model to study the
system’s dynamics and find the preferable
coordinated tariff and shuttle schedule that
maximize revenue for the contractor operating
the hospital’s parking services under a
predefined service level. We use a case study
from Hadassah Medical Center in Ein Kerem,
Jerusalem, to demonstrate the effectiveness of
our method. Our results show that the
coordinated solution provides significantly
better performance: more than a 30% increase in
service level, a 25% (about $5,000) increase in
daily revenue, and a 53% decrease in average
waiting time for a shuttle.
pdf
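The abstract does not name the DES tool used, but the coordinated tariff-plus-shuttle setting can be sketched in SimPy; the arrival rate, lot capacity, balking rule, and shuttle headway below are all illustrative assumptions:

    import random
    import simpy

    def driver(env, onsite, queue):
        # Occupancy-based tariff: the fuller the lot, the more drivers
        # balk to the free off-site lot served by the shuttle.
        balk_prob = 0.5 * onsite.count / onsite.capacity
        if onsite.count < onsite.capacity and random.random() > balk_prob:
            with onsite.request() as req:
                yield req
                yield env.timeout(random.expovariate(1 / 120.0))  # stay (min)
        else:
            queue.append(env.now)  # start waiting for the shuttle

    def source(env, onsite, queue):
        while True:
            yield env.timeout(random.expovariate(1 / 2.0))  # arrivals
            env.process(driver(env, onsite, queue))

    def shuttle(env, queue, waits, headway=15.0):
        while True:
            yield env.timeout(headway)
            waits += [env.now - t for t in queue]  # record shuttle waits
            queue.clear()

    random.seed(0)
    env = simpy.Environment()
    onsite = simpy.Resource(env, capacity=50)
    queue, waits = [], []
    env.process(source(env, onsite, queue))
    env.process(shuttle(env, queue, waits))
    env.run(until=480)  # one 8-hour day, in minutes
    print("mean shuttle wait:", sum(waits) / max(len(waits), 1))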
Technical Session · Simulation Around the World
Simulation Applications in Africa
Chair: Simon J. E. Taylor (Brunel University London)
Weather Prediction Simulations for East Africa
Julianne Sansa-Otim (Makerere University), Isaac Mugume
(Uganda National Meteorological Authority), and Mary
Nsabagwa (Makerere University)
Abstract
Numerical weather prediction (NWP) contributes
significantly to the production of appropriate
weather forecasts. These critical capabilities
were still largely lacking in East Africa in the
early 2010s and were recently established under
the auspices of the WIMEA-ICT Project. The
project introduced the use of the Weather
Research and Forecasting (WRF) model in the
region. This model was adopted by the National
Hydro-meteorological Agencies and is largely
being used as guidance in the operations.
However, due to advances in technology, there is
a need to build capacity in NWP data
assimilation as well as Machine Learning to
further improve the accuracy of weather and
climatic predictions. Additional crop weather
modelling studies will further inform
agricultural productivity enhancement in the
region.
pdf
Challenges of Using Simulation for Healthcare
Operations Management in Developing Countries: The
Case of Ethiopia
Tesfamariam M. Abuhay (University of Gondar, Queen's
University); Mihret Woldesemayat Tereda, Lomi Eyachew
Adane, and Malefia Demilie Melesse (University of
Gondar); Stewart Robinson (Newcastle University); and
Vedat Verter (Queen's University)
Abstract
Simulation models have been employed in
developed countries for healthcare service
operations management. However, leveraging
simulation in developing countries is limited
because healthcare operations management
challenges are quite different due to scarcity
of resources, high population numbers, high
healthcare demand, and poor planning,
implementation, monitoring and evaluation. This
study, hence, aims to investigate the usage and
adoption of simulation for healthcare operations
management in developing countries and the
challenges of using simulation in this context
by studying the case of Ethiopia through a
systematic literature review and survey.
pdf
Hybrid Approaches for Handling Mobile Crane Location
Problems in Construction Sites
Khaoula Boutouhami, Rafik Lemouchi, and Mohamed Assaf
(University of Alberta); Ahmed Bouferguene (University
of Alberta); Mohamed Al-Hussein (University of Alberta);
and Joe Kosa (NCSG Crane and Heavy Haul Services)
Abstract
Mobile crane location (MCL) in modular
construction is a complex problem that affects
both construction safety and efficiency.
Sub-optimal MCL planning increases the number of
crane relocations and the overall project cost.
Interestingly, recent research on crane
operation planning and analysis has focused on
determining crane configurations, boom lengths,
and radii to enable lifting from a given crane
location. However, with a large number of
feasible locations, finding the best solution
becomes a harder task; in this respect, finding
a single crane location that ensures an optimal
lift plan, e.g., one minimizing the number of
pick locations, remains an open gap. As a
result, this paper aims to
bridge this gap by providing a hybrid approach
using heuristics, grid-based, and combinatorial
optimization algorithms to find the least
required lifting points. The proposed approach
is tested on a case study of a modular building.
The study contributes by minimizing the number
of crane relocations to enhance budget and cost
planning.
pdf
Technical Session · Simulation Around the World
Decision Making with Discrete-event Simulation I
Chair: Stewart Robinson (Newcastle University)
Modeling and Simulation for Farming Drone Battery
Recharging
Leonardo Grando (University of Campinas); Juan F.
Galindo Jaramillo (University of Campinas, Herminio
Ometto Foundation); and José Roberto Emiliano Leite
and Edson Luiz Ursini (University of Campinas)
Abstract
The Connected Farm is composed of several
elements that communicate with each other
through a 4G/5G Radio Base Station (RBS) placed
in the middle of the farm. This RBS is connected
to the Internet, allowing communication for all
kinds of autonomous devices, performing
uninterrupted tasks. This work simulates the
Connected Farm environment for autonomous
drones. Our model determines when each drone
needs to recharge its batteries without any
coordination among drones regarding the
recharging decision; avoiding this communication
reduces the drones’ battery usage.
pdf
Simulating the Social Influence in Transport Mode
Choices
Kathleen Salazar-Serna (Pontificia Universidad
Javeriana, Universidad Nacional de Colombia); Lynnette
Hui Xian Ng (Carnegie Mellon University); Lorena Cadavid
and Carlos Jaime Franco (Universidad Nacional de
Colombia); and Kathleen M. Carley (Carnegie Mellon
University)
Abstract
Agent-based simulations have been used in
modeling transportation systems for traffic
management and passenger flows. In this work, we
hope to shed light on the complex factors that
influence transportation mode decisions within
developing countries, using Colombia as a case
study. We model an ecosystem of human agents
that decide at each time step on the mode of
transportation they would take to work. Their
decision is based on a combination of their
personal satisfaction with the journey they had
just taken, which is evaluated across a personal
vector of needs, the information they
crowdsource from their prevailing social
network, and their personal uncertainty about
the discomfort of trying a new transport
solution. We simulate different network
structures to analyze the social influence for
different decision-makers. We find that, in
low/medium connected groups, inquisitive people
actively change modes cyclically over the years,
while imitators cluster rapidly and change less
frequently.
pdf
Technical Session · Simulation Around the World
Decision Making with Discrete-event Simulation II
Chair: Cristina Ruiz-Martín (Carleton University)
A Simulation-Optimization Approach for Designing
Resilient Hyperconnected Physical Internet Supply
Chains
Rafael D. Tordecilla, Jairo R. Montoya-Torres, and
William J. Guerrero (Universidad de La Sabana)
Abstract
The Physical Internet (PI) is a recent paradigm
in supply chain management that proposes a
framework in which standardization and
optimization are key factors to raise supply
chain efficiency, resilience, and
sustainability. Strategic decisions form part
of the PI, including supply chain network
design (SCND). In fact, structuring a (near)
optimal design is essential to achieve the PI
objectives. Additionally, disruptive events such
as the COVID-19 pandemic, earthquakes, or
terrorist attacks threaten the supply chains.
These events are difficult to predict, but their
effects can be simulated when addressing this
problem. Hence, we propose a
simulation-optimization approach that hybridizes
a multi-objective multi-period mixed-integer
program with discrete-event simulation to
optimize both cost and resilience in the SCND.
Furthermore, a network hyperconnection strategy
is tested. Results show that both resilience and
risk are improved after hyperconnecting the
supply chain, especially when active edges are
disturbed, though this incurs higher costs.
pdf
Formal Modeling and Simulation of Economic Complexity
Networks with Emergent Behavior-DEVS
Tobias Carreira Munich and Rodrigo Castro (Departamento
de Computación, FCEyN-UBA / Instituto de Ciencias
de la Computación (ICC-CONICET))
Abstract
We present an application of the EB-DEVS
modelling framework for agent-based complex
adaptive systems to a systematic study of the
international Product Space network in the field
of Economic Complexity. The evolution of the
production structure of agents (countries)
becomes mutually determined by an emerging
macroscopic network (resulting from the
worldwide trade). This framework allows making
prospective analyses of the productive
structure of countries.
pdf
Predicting Job Waiting Times in a Stochastic
Scheduling Environment Using Simulation and Regression
Machine Learning Models
Ivan Kristianto Singgih (University of Surabaya, The
Indonesian Researcher Association in South Korea) and
Stefanus Soegiharto (University of Surabaya)
Abstract
Scheduling real systems is complicated because
various working conditions must be
considered. Although various combinatorial
optimization methods, ranging from mathematical
models to heuristics and metaheuristics, have
been developed, these methods can require a
long computational time due to the complexity of
the problems. This study proposes a framework to
understand the system’s behavior using
regression machine learning techniques. The
considered system could be any type, e.g., the
flow shop, job shop, and their variants, with a
certain scheduling method. The framework
consists of (1) the development of the
simulation for generating the data and (2) how
the data could be used for training the
regression machine learning models. An example
of the stochastic single-machine problem with
the First-In-First-Out rule is considered. The
framework could be used to simplify the process
of understanding the system’s behavior
without any necessity to solve the optimization
problem, which could be time-consuming.
pdf
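Under simple assumptions, the two framework steps collapse into a few lines: a Lindley-recursion simulation of the stochastic single machine with First-In-First-Out dispatching, followed by a regression model trained on the generated data (the feature set and the random forest choice are illustrative):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 2000
    interarrival = rng.exponential(1.0, n)
    service = rng.exponential(0.8, n)
    arrival = np.cumsum(interarrival)

    # Step 1: FIFO single-machine simulation (Lindley recursion).
    wait = np.zeros(n)
    free_at = 0.0
    for i in range(n):
        start = max(arrival[i], free_at)
        wait[i] = start - arrival[i]
        free_at = start + service[i]

    # Step 2: learn waiting times from features observable on arrival.
    X = np.column_stack([interarrival, service,
                         np.concatenate([[0.0], wait[:-1]])])
    model = RandomForestRegressor(random_state=0).fit(X[:1500], wait[:1500])
    print("held-out R^2:", model.score(X[1500:], wait[1500:]))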
Technical Session · Simulation Around the World
Post-disaster Relief
Chair: Enver Yucesan (INSEAD)
An Agent-Based Modeling to Simulate the Dynamics of
First Responders and Evacuees in Post-Disaster
Scenarios
Amirreza Pashapour and F. Sibel Salman (Koc University),
Sridhar R. Tayur (Carnegie Mellon University), and
Barış Yıldız (Koc University)
Abstract
In the aftermath of a sudden catastrophe, First
Responders (FR) strive to promptly reach and
rescue victims. Simultaneously, individuals take
to the roads to evacuate the affected region, access
medical facilities or shelters, and reunite with
their relatives. The escalated traffic
congestion significantly hinders critical FR
operations. In this study, we construct an
Agent-Based Simulation (ABS) model that extends
the existing models by incorporating FR agents,
their allocated road map, and their interaction
with evacuees in the network. Our model
investigates individuals' evacuation times as
well as FRs' rescue operation performance,
provided that a subset of road segments are
reserved for the explicit use of FRs. The
decision-maker can allocate these segments
manually within the simulation interface.
Subsequently, the consequences are discovered
through the earthquake scenario outputs of the
ABS model, casting light on its real-world
impact.
pdf
Optimization of Battery Allocation for
Post-Earthquake Damage Assessment Using Drones
Selver Tugba Yaldiz (Marmara University) and Elvin Coban
(Ozyegin University)
Abstract
Earthquakes are among the most common natural
disasters, and assessing the hazard levels of
the affected regions and planning post-disaster
operations, including search and rescue
operations, are critical. As roads can be
blocked by an earthquake and debris removal may
take time, preventing critical rescue operations
from starting, drone utilization has been
increasing. Since drones fly, they make it
easier to assess the damage levels. However,
drones have a major drawback: their batteries.
In this study, we propose a scenario-based
mathematical model to allocate a limited number
of batteries before the earthquake while
computing the drones’ paths for each scenario to
maximize the total expected priority scores.
Our preliminary analysis shows that small
instances can be solved very efficiently.
pdf
Simulation and Artificial Intelligence
Track Coordinator - Simulation and Artificial Intelligence: Edward Y. Hua (MITRE Corporation), Yijie Peng (Peking
University), Simon J. E. Taylor (Brunel University
London)
Technical Session · Simulation and Artificial Intelligence
Simulation Methodologies
Chair: Yifan Lin (Georgia Institute of Technology)
Generating Population Synthesis Using a Diffusion
Model
Jaewoong Kang, Young Kim, Muhammad Mu’az Imran,
Gi-sun Jung, and Yun Bae Kim (Sungkyunkwan University)
Abstract
Owing to the increase in computing power,
large-scale agent-based modeling (ABM) has been
increasingly used in various fields. However, a
complete and detailed individual population is
challenging to obtain because of confidentiality
concerns. Thus, modelers must adopt population
synthesis to emulate the joint distribution of
individual-level attributes of the actual
population in the region of interest.
Traditional population synthesis methods often
exhibit issues regarding scalability and
sampling zeros. Therefore, this paper presents
the use of a deep generative model called the
denoising diffusion probabilistic model to
generate new samples. Our proposed method
exploits the ability of deep generative models
to generate from noise in order to produce a
synthetic population, including sampling
zeros. In the experiments, our proposed model
achieved a standardized root mean squared error
of 2.130, outperforming the 2.381 of the deep
learning-based population synthesis method (VAE)
and the 7.620 of the traditional population
synthesis method (MCMC).
pdf
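For reference, the closed-form forward (noising) process that a denoising diffusion probabilistic model learns to invert can be written in a few lines of NumPy; the schedule, step count, and attribute shapes below are illustrative:

    import numpy as np

    T = 100
    betas = np.linspace(1e-4, 0.02, T)       # linear noise schedule
    alphas_bar = np.cumprod(1.0 - betas)

    def q_sample(x0, t, rng):
        """Draw x_t ~ q(x_t | x_0) in closed form."""
        eps = rng.standard_normal(x0.shape)
        return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * eps

    rng = np.random.default_rng(0)
    x0 = rng.standard_normal((5, 8))         # 5 synthetic persons, 8 attributes
    print(q_sample(x0, t=50, rng=rng))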
Quantum Embedding Framework of Industrial Data for
Quantum Deep Learning
Hyunsoo Lee (Kumoh National Institute of Technology) and
Amarnath Banerjee (Texas A&M University)
Abstract
Quantum computing is a contemporary engineering
discipline that innovatively overcomes
computational burdens. This study applies
quantum computing techniques to data analyses
with input data issues. When a dataset has
insufficient attributes and uncertainties,
quantum embedding techniques contribute to the
dimensional expansion of input vectors and the
quantification of uncertainties. The converted
qubits are linked to subsequent deep learning
modules, and this architecture is used for
accurate data analysis. This study proposes a
quantum embedding technique and a corresponding
quantum neural network (QNN) to better
understand these processes. In this QNN
architecture, input data are converted into
corresponding qubits, which are transformed with
quantum phase-operating modules. The quantum
features pass through subsequent deep learning
layers for more accurate data analyses. To
demonstrate the effectiveness of the proposed
model, a process model and relevant analyses are
presented and compared with existing deep
learning methods.
pdf
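As a toy illustration of the dimensional expansion that quantum embedding provides, the common angle-embedding scheme maps d input features to a 2^d-dimensional state vector; the NumPy sketch below is a generic textbook construction, not the paper's specific embedding:

    import numpy as np

    def ry(theta):
        """Single-qubit RY rotation matrix."""
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, -s], [s, c]])

    def angle_embed(x):
        state = np.array([1.0])
        for xi in x:  # tensor product of rotated single-qubit states
            state = np.kron(state, ry(xi) @ np.array([1.0, 0.0]))
        return state

    x = np.array([0.3, 1.2, 0.7])   # 3 features -> 8 amplitudes
    print(angle_embed(x))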
Simulation of a Novel, Low Swap, Sparse
Hyper-Dimensional Neural Network Architecture for
Anomaly Detection AI at the Edge
Dean C. Mumme (RAM Laboratories, Inc.) and Ksenia Burova
(RAM Laboratories, Inc)
Abstract
This paper details the simulation and
performance results of a Sparse
Hyper-Distributed Robust Efficient Neural
Network (SpHyRE-Net) architecture that performs
anomaly detection for real-world time-series
data. SpHyRE-Net is an innovative, novel, low
size, weight and power (SWaP) machine learning
solution for devices operating at the tactical
edge. It utilizes bit operations and sparse
hyper-dimensional representations for
bio-inspired learning via a Hebbian-like rule
that results in a combined power-latency
reduction of two orders of magnitude over ordinary
deep networks. The paper details the application
of SpHyRE-Net to real-world cell-traffic
datasets as well as simulation requirements to
minimize latency and memory use. Also discussed
are the mechanisms necessary for implementing
the architecture on an FPGA as a precursor to
realization on a neuro-morphic ASIC with
ultra-low power profile.
pdf
Technical Session · Simulation and Artificial Intelligence
Applications in Energy, Climate, and Finance
Chair: Dean Mumme (RAM Laboratories, Inc.)
A Conversational Human-Computer Interface for Smart
Energy System Simulation Environments
Gabriel Dengler (FAU Erlangen-Nuremberg, Laboratory of
Computer Networks and Communication Systems); Pooia
Lalbakhsh (Monash University); Peter Bazan (FAU
Erlangen-Nuremberg, Laboratory of Computer Networks and
Communication Systems); Ariel Liebmann (Monash
University); and Reinhard German (FAU
Erlangen-Nuremberg, Laboratory of Computer Networks and
Communication Systems)
Abstract
This paper introduces a conversational framework
that enhances the usability of smart energy
system simulations. This study is centered
around OpenAI's Generative Pre-trained
Transformer (GPT), a fine-tuned conversational
model that allows users to communicate with the
system in a natural way. Therefore, users can
describe their simulation scenarios in plain
language and GPT seamlessly translates these
descriptions into Python scripts, used as inputs
to the simulation environment, in our case,
AnyLogic Simulation Software. Our framework is
based on the i7-AnyEnergy core framework to
compute distribution flows and relevant
statistics. The proposed human-machine interface
facilitates and accelerates simulation modeling,
as demonstrated through the two scenarios we
have provided in this paper. Overall, our
conversational framework has the potential to
significantly improve the user experience of
smart energy system simulation environments. By
simplifying the interaction between users and
complex simulation models, we enable users to
obtain valuable insights rapidly and more
easily.
pdf
A Machine Learning Framework to Explain Complex
Geospatial Simulations: A Climate Change Case
Study
Tanvir Ferdousi (University of Virginia); Mingliang Liu,
Kirti Rajagopalan, and Jennifer Adam (Washington State
University); and Abhijin Adiga, Mandy Wilson, S. S.
Ravi, Anil Vullikanti, Madhav Marathe, and Samarth
Swarup (University of Virginia)
Abstract
The explainability of large and complex
simulation models is an open problem. We present
a framework to analyze such models by processing
multidimensional data through a pipeline of
target variable computation, clustering,
supervised classification, and feature
importance analysis. As a use case, the
well-known large-scale hydrology and crop
systems simulator VIC-CropSyst is utilized to
evaluate how climate change may affect water
availability in Washington, United States. We
study how snowmelt varies with climate variables
(temperature, precipitation) to identify
different response characteristics. Based on
these characteristics, spatial units are
clustered into six distinct classes. A random
forest classifier is used with Shapley values to
rank static soil and land parameters that help
detect each class. The results also include an
analysis of risk across different classes to
identify areas vulnerable to climate change.
This paper demonstrates the usefulness of the
proposed framework in providing explainability
for large and complex simulations.
pdf
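The cluster-classify-rank stages of the pipeline can be prototyped compactly in scikit-learn; in the sketch below the data is synthetic, and permutation importance stands in for the paper's Shapley-value ranking:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    responses = rng.normal(size=(300, 12))  # per-cell response characteristics
    static = rng.normal(size=(300, 6))      # static soil/land parameters

    # Cluster response characteristics into classes, then learn to
    # detect each class from the static parameters.
    labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(responses)
    clf = RandomForestClassifier(random_state=0).fit(static, labels)

    imp = permutation_importance(clf, static, labels, random_state=0)
    print("feature ranking:", np.argsort(imp.importances_mean)[::-1])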
Cutting through the Noise: Machine Learning Proxies
for High Dimensional Nested Simulation
Xintong Li, Ben Mingbin Feng, and Tony Wirjanto
(University of Waterloo)
Abstract
Deep learning models have gained great success
in many applications, but their adoption in
financial and actuarial applications has been
received by regulators with some trepidation.
The lack of transparency and interpretability of
these models leads to skepticism about their
resilience and reliability, which are important
factors to ensure financial stability and
insurance benefit fulfillment. In this study, we
use stochastic simulation as a data generator to
examine deep learning models under controlled
settings. Our study shows interesting findings
in fundamental questions like "What do deep
learning models learn from noisy data?'' and
"How well do they learn from noisy data?''.
Based on our findings, we propose an efficient
nested simulation procedure that uses deep
learning models as proxies to estimate tail risk
measures of hedging errors for variable
annuities. The proposed procedure uses deep
learning models to concentrate simulation budget
on tail scenarios while maintaining transparency
in the estimation.
pdf
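A stylized sketch of the core idea follows, with a small scikit-learn network standing in for the paper's deep learning proxies and a synthetic loss surface standing in for hedging errors; the proxy ranks outer scenarios so that further inner simulation can concentrate on the tail:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    outer = rng.uniform(-2, 2, size=(500, 3))        # outer risk scenarios
    true_loss = (outer ** 2).sum(axis=1)              # unknown in practice
    noisy = true_loss + rng.normal(0, 2.0, size=500)  # few inner replications

    proxy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(outer, noisy)
    scores = proxy.predict(outer)
    tail = np.argsort(scores)[-50:]  # re-simulate only this 10% of scenarios
    print("proxy-selected tail scenarios:", tail[:10])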
Technical Session · Simulation and Artificial Intelligence
Reinforcement Learning
Chair: Gabriel Dengler (FAU Erlangen-Nuremberg, Laboratory
of Computer Networks and Communication Systems)
Reinforcement Learning with an Abrupt Model
Change
Wuxia Chen and Taposh Banerjee (University of
Pittsburgh) and Jemin George and Carl Busart (US Army
Research Lab)
Abstract
The problem of reinforcement learning is
considered where the environment or the model
undergoes a change. An algorithm is proposed
that an agent can apply in such a problem to
achieve the optimal long-time discounted reward.
The algorithm is model-free and learns the
optimal policy by interacting with the
environment. It is shown that the proposed
algorithm has strong optimality properties. The
effectiveness of the algorithm is also
demonstrated using simulation results. The
proposed algorithm exploits a fundamental
reward-detection trade-off present in these
problems and uses an algorithm for the quickest
detection of the model change. Recommendations
are provided for faster detection of model
changes and for smart initialization strategies.
pdf
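To illustrate the quickest-detection ingredient, the sketch below runs a standard CUSUM statistic over a reward stream with an abrupt mean shift; the distributions, threshold, and change point are illustrative, and the paper's own algorithm is not reproduced here:

    import numpy as np

    rng = np.random.default_rng(0)
    rewards = np.concatenate([rng.normal(1.0, 1.0, 300),   # pre-change model
                              rng.normal(0.2, 1.0, 300)])  # post-change model

    # CUSUM for a known mean shift mu0 -> mu1 with unit variance:
    # accumulate log-likelihood-ratio increments, resetting at zero.
    mu0, mu1, threshold = 1.0, 0.2, 10.0
    increments = (mu1 - mu0) * (rewards - (mu0 + mu1) / 2.0)
    s, alarm = 0.0, None
    for t, inc in enumerate(increments):
        s = max(0.0, s + inc)
        if s > threshold:
            alarm = t
            break
    print("change detected at step", alarm)  # true change point is 300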
Dynamic Scheduling of Gantry Robots using Simulation
and Reinforcement Learning
Horst Zisgen and Robert Miltenberger (Hochschule
Darmstadt) and Markus Hochhaus and Niklas Stöhr
(SimPlan AG)
Abstract
Industry 4.0 induces an increasing demand for
autonomous interaction between the units of
production facilities, like work centers and
transportation equipment. This has an impact on
the requirements for production scheduling and
control algorithms, which must be capable of
adapting autonomously to changes on the shop floor.
This paper presents a combination of
Reinforcement Learning and discrete event
simulation for controlling a flexible flow shop
using a gantry robot system as transportation
unit. In a gantry robot system parts are
transported by carriages fitted with grippers
that travel along rails from machine to machine.
The presented agent learns autonomously the
right control policy to move the carriages. It
is shown that, in cases where the optimal policy
can be determined, the Reinforcement Learning
based policy is optimal, and in other cases the
achieved throughput slightly exceeds the
throughput gained by a heuristic priority rule
for controlling the gantry robot.
pdf
Learning Environment for the Air Domain (LEAD)
Andreas Strand, Patrick R Gorton, Martin Asprusten, and
Karsten Brathen (FFI)
Abstract
A substantial part of fighter pilot training is
simulation-based and involves computer-generated
forces controlled by predefined behavior models.
The behavior models are typically manually
created by eliciting knowledge from experienced
pilots, which is a time-consuming process.
Despite the work put in, the behavior models are
often unsatisfactory due to their predictable
nature and lack of adaptivity, forcing
instructors to spend time manually monitoring
and controlling them. Reinforcement and
imitation learning pose as alternatives to
handcrafted models. This paper presents the
Learning Environment for the Air Domain (LEAD),
a system for creating and integrating
intelligent air combat behavior in military
simulations. By incorporating the popular
programming library and interface Gymnasium,
LEAD allows users to apply readily available
machine learning algorithms. Additionally, LEAD
can communicate with third-party simulation
software through distributed simulation
protocols, which allows behavior models to be
learned and employed using simulation systems of
different fidelities.
pdf
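Because LEAD exposes behavior models through Gymnasium, a user-defined environment follows the standard Gymnasium contract; the skeleton below shows that contract with placeholder observation/action spaces and dynamics, not the actual LEAD interface:

    import gymnasium as gym
    import numpy as np

    class ToyAirCombatEnv(gym.Env):
        def __init__(self):
            self.observation_space = gym.spaces.Box(-1.0, 1.0, shape=(6,))
            self.action_space = gym.spaces.Discrete(4)  # e.g. turn/climb/dive/hold

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            self._state = self.np_random.uniform(-1, 1, size=6).astype(np.float32)
            return self._state, {}

        def step(self, action):
            # Placeholder dynamics and reward; a real model would query
            # the underlying air-combat simulation here.
            drift = self.np_random.normal(0, 0.05, size=6)
            self._state = np.clip(self._state + drift, -1, 1).astype(np.float32)
            reward = float(-abs(self._state[0]))
            terminated = bool(abs(self._state[0]) < 0.01)
            return self._state, reward, terminated, False, {}

    env = ToyAirCombatEnv()
    obs, info = env.reset(seed=0)
    obs, r, term, trunc, info = env.step(env.action_space.sample())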
Technical Session · Simulation and Artificial Intelligence
Artificial Intelligence in Manufacturing Applications
Chair: Andreas Strand (FFI)
Dispatching in Real Frontend Fabs with Industrial
Grade Discrete-Event Simulations by Deep Reinforcement
Learning with Evolution Strategies
Patrick Stöckermann, Alessandro Immordino, and
Thomas Altenmüller (Infineon Technologies AG);
Georg Seidel (Infineon Technologies Austria); Martin
Gebser and Pierre Tassel (University of Klagenfurt); and
Chew Wye Chan and Feifei Zhang (D-SIMLAB Technologies
Pte Ltd)
Abstract
Scheduling is a fundamental task in each
production facility with implications on the
overall efficiency of the facility. While
classic job-shop scheduling problems become
intractable when the number of machines and jobs
increases, the problem gets even more complex in
the context of semiconductor manufacturing,
where flexible production control and stochastic
event handling are required. In this paper, we
propose a Deep Reinforcement Learning approach
for lot dispatching to minimize the Flow Factor
(FF) of a digital twin of a real-world,
stochastic, large-scale semiconductor
manufacturing facility. We present the first
application of Reinforcement Learning to an
industrial grade semiconductor manufacturing
scenario of that size. Our approach leverages
self-attention mechanisms to learn an effective
dispatching policy for the manufacturing
facility and is able to reduce the global FF of
the fab.
pdf
Managing Bottlenecks in Systems with Product
Recovery
Leila Talebi and Lin Guo (South Dakota School of Mines &
Technology)
Abstract
Effectively managing products at the end of
their lifecycle is increasingly crucial as
numerous systems adopt recovery strategies.
However, many are limited to remanufacturing or
recycling as the only recovery option.
Effectively handling end-of-life products
demands diverse approaches, including
refurbishing and cannibalization. Sustainable
recovery centers and manufacturers encounter
challenges linked to uncertainties about the
quantity and condition of returned products,
which can disrupt operations and lead to
bottlenecks. Our solution employs machine
learning, specifically a CNN-LSTM model that
combines Convolutional Neural Networks (CNN) and
Long Short-Term Memory (LSTM), for predicting
return product quantity and quality.
Additionally, we utilize scenario-based
simulations to proactively identify and
address bottlenecks within a short timeframe,
especially within systems managing multiple
recovery options or dealing with complex and
hazardous materials.
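As a hedged sketch of the named architecture
class (layer sizes, the eight input features, and
the two-output head are illustrative assumptions,
not the authors' configuration), a PyTorch
CNN-LSTM could be wired as follows:

import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.conv = nn.Sequential(           # CNN: local temporal patterns
            nn.Conv1d(n_features, 16, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(16, hidden, batch_first=True)  # longer-range dynamics
        self.head = nn.Linear(hidden, 2)     # outputs: return quantity, quality

    def forward(self, x):                    # x: (batch, time steps, features)
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)  # Conv1d wants (B, C, T)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])         # forecast from the last time step

model = CNNLSTM()
pred = model(torch.randn(4, 12, 8))          # 4 series, 12 periods, 8 features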
pdf
Simulation-Based Optimization for Enhanced CCS
Schematic Arrangement Design
SookYoung Son (Seoul National University, HDKSOE);
HyeonGoo Pyeon (HDKSOE); Jihee Kim (HDHHI); and Jong Hun
Woo (Seoul National University, Research Institute of
Marine Systems Engineering)
Abstract
An LNG cargo tank, referred to as the Cargo
Containment System (CCS), encompasses several
barriers intended for the storage of LNG at
extremely low temperatures. In the case of the
membrane-type CCS, each barrier is composed of
insulation panels and membrane sheets. The CCS
schematic arrangement endeavors to minimize the
number of panels and sheets to enhance the
manufacturing productivity. In this study, a
combinatorial optimization approach is adopted
to obtain the optimal CCS schematic arrangement.
Then, a simulation environment is established to
assess the arrangement results under diverse
design conditions. By comparing the actual CCS
design with the results of the proposed
arrangement, the effectiveness of the proposed
approach is validated.
pdf
Technical Session · Simulation and Artificial Intelligence
Artificial Intelligence and Optimization
Chair: Patrick Stöckermann (Infineon Technologies AG,
University of Klagenfurt)
Ensemble-Based Infill Search Simulation Optimization
Framework
José Arnaldo Barra Montevechi, João Victor
Soares do Amaral, Rafael de Carvalho Miranda, and Carlos
Henrique dos Santos (Federal University of Itajubá)
and Flávio de Oliveira Brito and Michael E. F. H.
S. Machado (FlexSim Brazil, Inc.)
Abstract
Simulation is widely used in several areas of
knowledge, from engineering to biology,
including physics and finance. It allows the
evaluation of the model’s results under
different conditions, enabling performance
analysis and more informed decision-making.
However, simulation can be computationally
intensive, especially when we consider complex
models. To deal with this problem, metamodeling
has been increasingly used as a simulation
optimization technique. In this article, we
propose a new adaptive metamodeling method for
simulation optimization, which aims to achieve
better results using fewer experiments. This
method combines machine learning and
metaheuristic techniques, allowing the
identification of the most important regions of
the search space, which can be explored more
efficiently to obtain optimal solutions. The
results achieved in a manufacturing problem show
that the proposed method presents a significant
improvement in the achieved objective function
value, in comparison with the conventional
benchmark method, without compromising the
simulation execution time.
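The infill loop described above alternates
between fitting a cheap surrogate on all
evaluated points and letting a metaheuristic
search the surrogate for the next point to
simulate. A minimal sketch under assumed
ingredients (a random forest surrogate and
differential evolution; the paper's ensemble
method differs):

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.optimize import differential_evolution

def expensive_simulation(x):
    # Placeholder objective (an assumption, not the paper's model).
    return float(np.sum((x - 0.3) ** 2) + 0.05 * np.random.randn())

rng = np.random.default_rng(0)
bounds = [(0.0, 1.0)] * 3
X = rng.uniform(0, 1, size=(10, 3))                  # initial design
y = np.array([expensive_simulation(x) for x in X])

for _ in range(15):                                  # infill iterations
    surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    # The metaheuristic searches the cheap surrogate, not the simulation.
    res = differential_evolution(lambda x: surrogate.predict(x.reshape(1, -1))[0],
                                 bounds, seed=0, maxiter=30, polish=False)
    X = np.vstack([X, res.x])
    y = np.append(y, expensive_simulation(res.x))    # one expensive evaluation

best = X[np.argmin(y)]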
pdf
Reusing Historical Observations in Natural Policy
Gradient
Yifan Lin and Enlu Zhou (Georgia Institute of
Technology)
Abstract
Reinforcement learning provides a mathematical
framework for learning-based control, whose
success largely depends on the amount of data it
can utilize. The efficient utilization of
historical samples obtained from previous
iterations is essential for expediting policy
optimization. Empirical evidence has shown that
offline variants of policy gradient methods
based on importance sampling work well. However,
existing literature often neglects the
interdependence between observations from
different iterations, and the good empirical
performance lacks a rigorous theoretical
justification. In this paper, we study an
offline variant of the natural policy gradient
method that reuses historical observations. We
show that the biases of the proposed estimators
of the Fisher information matrix and gradient
are asymptotically negligible, and that reusing
observations reduces the conditional variance of
the gradient estimator.
The proposed algorithm and convergence analysis
could be further applied to popular policy
optimization algorithms such as trust region
policy optimization. Our theoretical results are
verified on classical benchmarks.
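The reuse idea can be illustrated on a toy bandit
with an ordinary (not natural) policy gradient:
past observations are reweighted by the
likelihood ratio between the current policy and
the policy that generated them. Everything below
is an illustrative simplification, not the
paper's estimator:

import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.0, 1.0])       # two-armed bandit (assumed toy problem)

def pi(theta):                          # softmax policy over the two arms
    e = np.exp([0.0, theta])
    return e / e.sum()

def grad_log_pi(theta, a):              # d/dtheta of log pi(a; theta)
    return (1.0 if a == 1 else 0.0) - pi(theta)[1]

theta, history = 0.0, []                # history: (theta at sampling, a, r)
for k in range(200):
    a = rng.choice(2, p=pi(theta))
    r = true_means[a] + rng.normal()
    history.append((theta, a, r))
    g = 0.0
    for theta_old, a_i, r_i in history[-20:]:     # reuse recent observations
        w = pi(theta)[a_i] / pi(theta_old)[a_i]   # importance weight
        g += w * r_i * grad_log_pi(theta, a_i)
    theta += 0.05 * g / min(len(history), 20)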
pdf
Simulation as Digital Twin
Track Coordinator - Simulation as Digital Twin: Andrea Matta (Politecnico di Milano), Jie Xu
(George Mason University)
Technical Session · Simulation as Digital Twin
Human Systems and Digital Twins
Chair: Jie Xu (George Mason University)
Leveraging Digital Twins to Support a Sustained Human
Presence on the Lunar Surface
Edward Hua and Linda Boan (The MITRE Corporation)
Abstract
Having a sustained human presence on the lunar
surface is a central objective of the Artemis
Program, as it represents a key pre-requisite in
resource mining operations on the Moon as well
as an important steppingstone for future Martian
exploration and colonization. Despite its
importance, this endeavor has little precedent
to rely on to inform the many challenges it
needs to address. Digital Twin (DT), in recent
years, has been employed in a wide range of
applications. In this paper, we explore its
usefulness in establishing the Artemis Base
Camp. DT can be applied to various stages of the
lifecycle of the lunar base development. We also
identify several open questions that need to be
addressed before the digital twin can be
utilized effectively in this project. In fact,
addressing these questions could facilitate
deploying DTs in use cases in a wider spectrum
of industries and sectors.
pdf
A General Framework for Human-in-the-loop Cognitive
Digital Twins
Parisa Niloofar (University of Southern Denmark); Sanja
Lazarova-Molnar (Institute AIFB, Karlsruhe Institute of
Technology); Olufemi A. Omitaomu and Haowen Xu (Oak
Ridge National Laboratory); and Xueping Li (University
of Tennessee)
Abstract
Modelling and analysis of systems that are
equipped with sensors and connected to the
Internet are becoming more automated and less
human-dependent. However, bringing expert
knowledge into the loop along with data obtained
from Internet of Things (IoT) devices minimizes
the risk of making poor and unexplainable
decisions and helps to assess the impact of
different strategies before applying them in
reality. While Digital Twins are more of a
data-driven simulation of the physical system,
Cognitive Digital Twins bring the human
dimension into the modelling and simulation. In
this paper, we aim to emphasize the crucial role
of explainability and the underlying rationale
behind automated or interactive decision-making
processes. Furthermore, we propose an initial
framework that delineates the specific points
within the feedback loop of a cognitive digital
twin where human involvement can be
incorporated.
pdf
A Behavior Simulation-Based Approach to Improve
Retail Performance: A Comprehensive Framework
Siddhartha Sarkar, Suman Kumar, and Vivek Balaraman
(Tata Consultancy Services Ltd)
Abstract
The retail industry is undergoing a profound
transformation, driven by technological
advancements including AI and evolving consumer
behaviors. However, retail decision making
currently lacks knowledge of, and ways to
integrate, the customer behavioral drivers
behind purchase decisions. We show how this can be
done through a four-step approach that will
create a behavior simulation model for retail
use cases. We use a real world problem as a
guiding example to explain our approach. Our
approach enables retailers to use behavioral
drivers to nudge customers and provides better
explainability of the decisions.
pdf
Technical Session · Simulation as Digital Twin
Applications of Digital Twins
Chair: Giovanni Lugaresi (CentraleSupelec, Politecnico di
Milano)
Designing a Digital Twin Prototype for Improving
Vaccination Centers' Daily Operations
Mohamed Ali Wafdi, Yasmina Maïzi, and Ygal Bendavid
(ESG UQAM)
Abstract
In this research paper, we propose a digital
twin prototype to improve mass vaccination
centers in the Montreal region. This research is
important because it is always challenging to
define an optimal layout/capacity for healthcare
operations, especially in an emergency mode
(e.g., pandemic mode). Indeed, in such stressful
situations, all managers are more concerned
about the effectiveness of daily operations,
regardless of their efficiency. Following a
"design science" research approach, we developed
(i) an IoT prototype for real-time patient
tracking, (ii) a simulation model, and (iii)
integrated them to build our digital twin
prototype. Our institution's IoT lab was used as
a testbed research environment for developing
the IoT infrastructure and simulating the
vaccination center. While the prototype was
developed for vaccination centers, the approach
can be used in any other multi-patient/multi-flow
operational environment where real-time
visibility and simulation are required.
pdf
Utilizing Simulation to Evaluate the Design of a
Greenfield Multi-story Parking Structure and Impacts
to Surrounding Areas
Lourdes Murphy (National Institutes of Health (NIH)) and
Yusuke Legard (MOSIMTEC)
Abstract
The National Institutes of Health (NIH) main
campus in Bethesda, Maryland currently contains
30 parking structures. On any given day, 12,000
vehicles enter the campus. NIH is planning for
the south side of the campus to become the main
parking areas for employees and visitors.
Central to this vision is replacing a surface
lot, which contains 241 parking spaces, with the
construction of a greenfield six story parking
structure that has a planned capacity of 1420
parking spaces. NIH wanted to prioritize the
employee experience and emphasize the safety of
pedestrians and vehicles. MOSIMTEC utilized
simulation modeling to provide NIH with insight
on the impact of various entrance and exit
combinations into the parking structure. This
presentation will further describe the project,
the system being modeled, the inputs and outputs
of the simulation tool and the outcome upon the
design of the greenfield parking structure.
pdf
Increasing Efficiency of Fresh Meal Production Using
Simulation
Kean Dequeant and Daniel Paddon (Gousto) and Stephane
Dauzère-Pérès and Claude Yugma (Mines
Saint-Étienne, Univ Clermont Auvergne)
Abstract
The pandemic period has witnessed a rapid growth
of online delivery services in various sectors,
especially in the domain of fresh produce
e-commerce. Gousto, for instance, provides a
meal subscription service where customers select
their meals for a week, and subsequently receive
a box containing all the required ingredients
along with step-by-step cooking instructions for
the chosen recipes. In light of recent economic
difficulties worldwide, Gousto is prioritising
its efficiency to reduce cost and to continue
providing affordable meals to its customers. One
key aspect for Gousto was to improve its station
utilisation, through better routing of boxes
throughout the factory. The use of simulation as
a digital twin has been a key factor in the
development of a new routing algorithm, that has
now been put in production and has increased
station utilisation by 20%, in line with the
simulation's predictions.
pdf
Technical Session · Simulation as Digital Twin
Digital Twins and Energy Systems
Chair: Sanja Lazarova-Molnar (Karlsruhe Institute of
Technology, University of Southern Denmark)
Modeling and Real-time Simulation of Microgrid
Components using SystemC-AMS
Rahul Bhadani (Vanderbilt University, The University of
Alabama in Huntsville); Hao Tu and Srdjan Lukic (North
Carolina State University); and Gabor Karsai (Vanderbilt
University)
Abstract
Microgrids are localized power systems that can
function independently or alongside the main
grid. They consist of interconnected generators,
energy storage, and loads that can be managed
locally. Using SystemC-AMS, we demonstrate how
microgrid components, including solar panels and
converters, can be accurately modeled and
simulated, along with their interactions.
Real-time simulations are crucial for
understanding microgrid behavior and optimizing
components. This approach facilitates seamless
integration with hardware prototypes and
automation systems, supporting various
development stages. Our study presents a
best-case scenario for real-time simulation,
assuming each loop takes less time than the
simulation time step, with fallback to the
previous value if data isn't received in time.
This article introduces the first known
real-time simulation strategy using SystemC-AMS,
enabling the real-time simulation of microgrid
components and integration with external
devices. The implementation adopts a model-based
design approach, creating increasingly complex
systems with grid components and controllers.
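The real-time strategy the abstract sketches
(each loop iteration must finish within the time
step, falling back to the previous value when
data arrives late) can be illustrated with a
wall-clock-paced loop. This sketch is in Python
for brevity, whereas SystemC-AMS is a C++
library, and both helper functions are
placeholders:

import time

STEP = 0.01                    # simulation time step in seconds (assumed)
last_sensor_value = 0.0        # fallback if no fresh data arrives in time

def poll_sensor(timeout):
    # Placeholder for reading an external device; None means timeout.
    return None

def advance_model(u):
    # Placeholder for one solver step of the microgrid model.
    pass

next_deadline = time.monotonic()
for _ in range(1000):
    value = poll_sensor(timeout=STEP / 2)
    if value is not None:
        last_sensor_value = value          # fresh data received
    advance_model(last_sensor_value)       # else reuse the previous value
    next_deadline += STEP
    time.sleep(max(0.0, next_deadline - time.monotonic()))  # pace to wall clock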
pdf
Advancing Safety in Nuclear Applications with Reduced
Order Modeling and Digital Twin
Justin Williams, Nicole Hatch, Jean Ragusa, and Jian Tao
(Texas A&M University)
Abstract
Ionizing radiation refers to particles or
photons that carry enough energy to remove
electrons from atoms or molecules. Through
ionizing interactions, radiation can have severe
implications for human health and the
environment, making it essential to develop
effective strategies to manage the risks it
poses. To demonstrate the potential benefits of
applying digital twin technologies to
concerns regarding radioactive material in
laboratory, university, and national defense
settings, this paper presents the development of
a digital twin framework, and potential use
cases for the framework. The platform was
demonstrated in two scenario studies. The first
scenario involves a faux radiation-detecting
glovebox used for lab safety education, while
the second scenario addresses training for first
responders in a nuclear defense and safety
situation.
pdf
Simulation as a Soft Digital Twin for Maintenance
Reliability Operations
Xueping Li, Thomas Berg, Gerald Jones, and Kimon Swanson
(University of Tennessee, Knoxville) and Vincent
Lamberti, Luke Birt, and Pugazenthi Atchayagopal
(Consolidated Nuclear Security, LLC)
Abstract
The reliability of a critical facility depends
heavily on the effectiveness of its maintenance
process. This
process involves numerous sub-processes, which
can be challenging to model due to uncertainties
and complexities. System managers often seek a
predictive tool, and this work extends a
previous study that developed a digital twin of
a nuclear facility's maintenance task process
using data-driven and stochastic modeling, along
with expert input. The authors extended the
project's previous iteration by enhancing the
bootstrapping technique and improving the
model's fidelity.
pdf
Technical Session · Simulation as Digital Twin
Digital Twins and Warehouse Logistics
Chair: Edward Y. Hua (MITRE Corporation)
Renovation Logistics Park with Digital Twinning: A
Simulation-Optimization-Powered Toolbox
Peixue Yuan (Northwestern Polytechnical University), Chi
Zhang (Xi'an Jiaotong University), and Chenhao Zhou and
Li Xue (Northwestern Polytechnical University)
Abstract
As logistics parks are crucial nodes in the
logistics network, this paper concentrates on
their layout design problem, considering
numerous uncertain factors during
operations. To provide comprehensive support for
park planners and managers, a
simulation-optimization-powered toolbox is
developed for decision-making, with core
functions such as park layout design,
construction quantity calculations, and
performance evaluations. A case study
demonstrates the toolbox's effectiveness in
assisting users to achieve their desired layout
designs, and the result shows that the optimized
layout generated by the toolbox can lead to
improvements of approximately 13%.
pdf
A Simulation Optimization Method for Scheduling
Automated Guided Vehicles in a Stochastic Warehouse
Management System
Gongbo Zhang, Xiaotian Liu, and Yijie Peng (Peking
University)
Abstract
We consider the problem of scheduling automated
guided vehicles (AGVs) in a stochastic warehouse
management system. This problem was studied in
the Case Study Competition of the 2022 Winter
Simulation Conference. We propose a simulation
optimization method that simultaneously
optimizes dispatching and route planning for
AGVs to enhance the system performance.
Experimental results on two warehouse system
simulation scenarios demonstrate that the
proposed method outperforms the default method.
pdf
Emulation and Digital Twin Framework for the
Validation of Material Handling Equipment in Warehouse
Environments
Ankit Pandey, Rachael Flam, Raashid Mohammed, and Achuta
Kalidindi (Amazon)
Abstract
With modern warehouses becoming more automated,
there is a growing opportunity to test and
validate material handling concepts throughout
the project life cycle. Emulation and digital
twin provide a capability for material handling
system validation from the ideation stage
through post-implementation. An emulation model
is a virtual replica of a physical system, and a
digital twin is a transformation of an emulation
model via connection to a virtual or physical
controller. They can test factors such as design
mechanics and layouts, calculate throughput,
test controls logic, and perform product flow
analysis. Evaluation of these factors can
provide a relatively accurate metric for system
performance and lead to a more comprehensive
return on investment (ROI) analysis. This paper
discusses how incorporation of emulation and
digital twin into all stages of the project life
cycle of material handling systems can improve
system efficiency and mitigate live system
commissioning risk.
pdf
Technical Session · Simulation as Digital Twin
Digital Twins and Manufacturing
Chair: Cathal Heavey (University of Limerick)
Simulation Based High Fidelity Digital Twins of
Manufacturing Systems: An Application Model and
Industrial Use Case
Ali Ahmad Malik (Oakland University)
Abstract
Modern manufacturing systems are required to be
developed, commissioned, and reconfigured faster
than ever before. Conventional methods for the
development of manufacturing systems are
time-consuming due to their sequential nature. A
digital twin is an emerging technology that can
offer a high-fidelity simulation of a real
manufacturing system including its kinematics,
automation program, behavior, user interface,
and production parameters. Such a unified
digital twin can be used as a support tool for
verification and validation of complex behavior
of modern-day manufacturing systems during
design, commissioning, reconfiguration,
maintenance, and end-of-life. The resulting
benefits are to speed up the development and
reconfiguration phases and improve system
reliability. This article presents a framework
to develop and use a digital twin for the
development of complex machines. An industrial
case from a large automation company is
presented.
pdf
Data Requirements for a Digital Twin of a Robot
Workcell
Deogratias Kibira (National Institute of Standards and
Technology, University of Maryland - College Park) and
Guodong Shao (National Institute of Standards and
Technology)
Abstract
The applications of digital twins continue to
grow with the volume and variety of data
collected. These data support the modeling of
function, behavior, and structure of a physical
element. However, successfully building a
digital twin requires data identification, data
fusion, and data management. Thus, despite the
increase in data availability, there are still
challenges of data usage, especially data
scoping and scaling to implement a digital twin
for a specific purpose. The objective of this
paper is to identify data requirements for
various types of digital twins for a robot
workcell. The identification includes data
description, source, method of collection, and
data formats. The digital twin types include
descriptive digital twins, diagnostics and
prognostics digital twins, prescriptive digital
twins, and intelligent digital twins. The
outcome of this data requirements identification
can be used as a guide for developing and
validating digital twins for a robot workcell
lifecycle.
pdf
A Digital Twin for Production Control Based on
Remaining Cycle Time Prediction
Giovanni Lugaresi (KU Leuven); Pedro Luis Bacelar Dos
Santos, Alex Chalissery Lona, and Monica Rossi
(Politecnico di Milano); Eduardo Zancul (University of
Sao Paulo); and Andrea Matta (Politecnico di Milano)
Abstract
The recent industrial context has pushed
manufacturers to invest heavily in digitization
for a more efficient use of their equipment and
scarce resources. The digitization of industrial
environments allows the establishment of digital
decision-support tools such as digital twins, to
exploit the shop-floor data for making more
accurate decisions considering the real system
state. Existing literature focuses on the
development of specific digital twin components
as well as methods that are typically developed
and tested without an integration within a
digital twin architecture. This paper proposes a
complete digital twin framework with the purpose
of aiding production planning and control
operations. The focus is on the design of a
production control service that manages the
material flow in the real system using
simulation-based predictions of the remaining
cycle time. Preliminary experiments are done by
applying the digital twin architecture on a
lab-scale model, demonstrating the applicability
of the proposed approach.
pdf
Technical Session · Simulation as Digital Twin
Panel: Enhancing Digital Twins with Advances in Simulation
and Artificial Intelligence: Opportunities and Challenges
Chair: Barry L. Nelson (Northwestern University)
Enhancing Digital Twins with Advances in Simulation
and Artificial Intelligence: Opportunities and
Challenges
Simon J. E. Taylor (Brunel University London), Charles
Macal (Argonne National Laboratory), Andrea Matta
(Politecnico di Milano), Markus Rabe (TU Dortmund
University), Susan Sanchez (Naval Postgraduate School),
and Guodong Shao (National Institute of Standards and
Technology)
Abstract
Simulations are used to investigate physical
systems. A digital twin goes beyond this by
connecting a simulation with the physical system
with the purpose of analyzing and controlling
that system in real-time. In the past 5 years
there has been a substantial increase in
research into Simulation and Artificial
Intelligence (AI). The combination of Simulation
with AI presents many possible innovations.
Similarly, combining AI with Simulation presents
further possibilities including approaches to
developing trustworthy and explainable AI
methods, solutions to problems arising from
sparse or no data and better methods for time
series analysis. Given the progress that has
been made in Digital Twins and Simulation and
AI, what opportunities are there from combining
these two exciting research areas? What
challenges need to be overcome to achieve these?
This article discusses these from the
perspectives of six leading members of the
Modeling & Simulation community.
pdf
Track Coordinator - Simulation in Education: Omar Ashour (Penn State University), Christopher Lynch (Old
Dominion University)
Technical Session · Simulation in Education
Tools and Technologies in Simulation Education
Chair: Manuel D. Rossetti (University of Arkansas)
Introducing the Kotlin Simulation Library (KSL)
Manuel D. Rossetti (University of Arkansas)
Abstract
This paper introduces a Monte Carlo and
discrete-event simulation library for the Kotlin
programming language. The Kotlin Simulation
Library (KSL) provides functionality to perform
simulation experiments involving the generation
of random processes, the execution of
discrete-event simulation via the event and
process views, and the analysis of the
statistical quantities generated by simulation
models. The architecture of the library
leverages the object-oriented and functional
programming capabilities of the widely used
Kotlin programming language. The library
provides functionality that is similar to
proprietary software, while being open-source
and readily extensible. This paper provides an
overview of the architecture of the library. The
functionality of the library is illustrated
through several examples.
pdf
Teaching Discrete Event Simulation Software Design in
the Context of Computer Engineering
James Frederick Leathrum (Old Dominion University)
Abstract
Recent events resulted in the consolidation of a
degree program in Modeling & Simulation
Engineering into a degree in Computer
Engineering, retaining a major in Modeling &
Simulation Engineering. The resulting major
strongly highlights the computational aspects of
M&S. However, the needs of discrete event
simulation in computer engineering have a
somewhat different focus. For instance, the
management of simultaneous events is crucial in
digital circuit simulation. This paper looks at
refocusing a course on discrete event simulation
software design to meet the needs of a computer
engineering degree while maintaining
applicability to the more general community. It
discusses modifications in the treatment of
models and then mapping those models to
software.
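One conventional way to manage the simultaneous
events highlighted above is to key the future
event list by (time, priority, insertion order),
so that ties at the same simulated time resolve
deterministically. A small sketch of that idea
(not the course's actual design):

import heapq
import itertools

counter = itertools.count()    # FIFO tie-breaker for equal (time, priority)
future_events = []

def schedule(time, priority, action):
    heapq.heappush(future_events, (time, priority, next(counter), action))

def run(until):
    while future_events and future_events[0][0] <= until:
        now, _, _, action = heapq.heappop(future_events)
        action(now)

schedule(1.0, 1, lambda t: print(f"t={t}: gate B evaluates"))
schedule(1.0, 0, lambda t: print(f"t={t}: gate A evaluates first"))
run(until=10.0)

Because the counter is unique, the heap never
compares the stored actions, and events with a
smaller priority number run first at equal times,
a pattern similar in spirit to delta-cycle
ordering in digital circuit simulators.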
pdf
Technical Session · Simulation in Education
Panel: ChatGPT in M&S Education: Opportunities and
Challenges
Chair: Andreas Tolk (The MITRE Corporation)
Chances and Challenges of ChatGPT and Similar Models
for Education in M&S
Andreas Tolk (The MITRE Corporation), Philip Barry
(L3Harris Corporation), Margaret Loper (Georgia Tech
Research Institute), Ghaith Rabadi (University of
Central Florida), William Scherer (University of
Virginia), and Levent Yilmaz (Auburn University)
Abstract
This position paper summarizes the inputs of a
group of experts from academia and industry
presenting their view on chances and challenges
of using ChatGPT within Modeling and Simulation
education. The experts also address the need to
evaluate continuous education as well as
education of faculty members to address
scholastic challenges and opportunities while
meeting the expectations of industry. Generally,
the use of ChatGPT is encouraged, but it needs
to be embedded into an updated curriculum with
more emphasis on validity constraints, systems
thinking, and ethics.
pdf
Technical Session · Simulation in Education
Behavioral and Entrepreneurial Aspects in Simulation
Chair: Canan Gunes Corlu (Boston University)
Entrepreneurial Mindset Learning (EML) in Simulation
Education
Michael E. Kuhl (Rochester Institute of Technology)
Abstract
An entrepreneurial mindset is associated with
recognizing and seeking opportunity that can
result in societal benefits. Entrepreneurial
minded learning (EML) is a pedagogy that has
gained increasing attention in science,
technology, engineering, and math education. In
this paper, we present a set of examples to
illustrate how EML methods can be applied in
simulation courses to foster the development of
the entrepreneurial mindset of students. In
addition, we discuss some of the opportunities
and challenges for adoption of EML in simulation
education.
pdf
Can Gambling Ads Affect Customer Risk Behavior? A
Simulation Study of the “888” Case
David Lopez-Lopez (ESADE Business School), Giovanni
Giusti (Tecnocampus - Pompeu Fabra University), Angel A.
Juan (Universitat Politecnica de Valencia), and Canan
Gunes Corlu (Boston University)
Abstract
The aim of this research paper is to investigate
the connection between advertising and consumer
behavior in the gambling industry, which heavily
relies on advertising. Specifically, it examines
the impact of advertising on risky behavior
among consumers, using the well-known Spanish
gambling brand “888 Poker” as a case
study. The experimental design involves a
simulated asset market approach with 92
participants, and the data collected is analyzed
to draw conclusions regarding the relationship
between advertising and risky behavior in the
context of the gambling industry.
pdf
Track Coordinator - Simulation Optimization: David J. Eckman (Texas A&M University), Siyang Gao (City
University of Hong Kong)
Technical Session · Simulation Optimization
Ranking and Selection I
Chair: Travis Goodwin (MITRE Corporation)
Risk-Sensitive Ordinal Optimization
Dohyun Ahn (The Chinese University of Hong Kong) and
Taeho Kim (Texas A&M University)
Abstract
We consider the problem of risk-sensitive
ordinal optimization, which aims to identify the
"least risky'' system among a finite number of
stochastic systems. Each system's riskiness is
assumed to be measured by the probability that
the system's loss exceeds a common threshold.
Since the crude Monte Carlo estimator is highly
inefficient in estimating rare-event
probabilities, conventional ordinal optimization
approaches coupled with that estimator show
significant performance degradation in this
problem, particularly for sufficiently large
loss thresholds. To circumvent this issue, assuming
that the parametric form of the underlying
distribution is known, we propose to use the
tail parameter, a function of distributional
parameters, as a surrogate for the loss
probability in comparing and ranking systems,
which is shown to work well for many well-known
distributions. Building upon this observation,
we find the optimal computing budget allocation
scheme that maximizes the likelihood of
identifying the least risky system.
pdf
Data-Driven Optimal Allocation for Ranking and
Selection under Unknown Sampling Distributions
Ye Chen (Virginia Commonwealth University)
Abstract
Ranking and selection (R&S) is the problem of
identifying the optimal alternative from
multiple alternatives through sampling them. In
the existing R&S literature, sampling
distributions of the observations are usually
assumed to be from some known parametric
distribution families, even in works that
consider input uncertainty. By contrast, this
paper considers R&S under completely unknown
sampling distributions. For the first time, we
propose a computationally tractable,
nonparametric, tuning-free sequential budget
allocation strategy that can asymptotically
achieve the optimal allocation specified by
large deviation analysis. Especially, we propose
a new point estimation approach for estimating
the optimal large deviation rates directly,
which efficiently solves the challenge of
estimating large deviation rate functions in
the absence of known sampling distributions.
pdf
POMDP-based Ranking and Selection
Ruihan Zhou and Yijie Peng (Peking University)
Abstract
In this paper, we formulate the ranking and
selection (R&S) problem as a stochastic control
problem under the Bayesian framework. We propose
to use particle filter to approximate the
posterior distribution of states under the
general Bayesian framework. Learning and
decision making are treated under the umbrella of a
partially observable Markov decision process and
a rollout policy based on Monte Carlo simulation
is proposed. This policy can use one or more
classic R&S approaches as base policies to
efficiently learn the value function by rolling
out simulation trajectories. We present
numerical examples to demonstrate the
effectiveness of the rollout policy, whose
performance is significantly improved relative
to the base policies.
pdf
Technical Session · Simulation Optimization
Ranking and Selection II
Chair: Ye Chen (Virginia Commonwealth University)
Top-Two Thompson Sampling for Selecting
Context-Dependent Best Designs
Xinbo Shi, Yijie Peng, and Gongbo Zhang (Guanghua School
of Management, Peking University)
Abstract
We consider a contextual ranking and selection
problem which aims to identify the
best-performing alternative for each context.
The performance is measured by an arbitrary
identifiable statistical characteristic. Under a
Bayesian framework, we establish the posterior
large deviation ratios for general adaptive
sampling policies. We propose an efficient
sampling policy based on top-two Thompson
sampling, which is proven to be consistent.
Numerical experiments demonstrate that the
proposed algorithm outperforms existing
algorithms under both Gaussian and non-Gaussian
settings.
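Top-two Thompson sampling can be sketched in its
plain (non-contextual, Gaussian) form: draw from
the posterior to pick a leader, and with
probability 1 - beta keep redrawing until a
challenger emerges. This toy version omits the
contexts and general performance measures treated
in the paper:

import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.0, 0.2, 0.5, 0.4])  # unknown to the algorithm
K, beta = len(true_means), 0.5
n = np.zeros(K)                              # sample counts
s = np.zeros(K)                              # sums of observations

def posterior_draw():
    # Gaussian model with a vague prior and known unit variance (assumptions).
    mean = np.where(n > 0, s / np.maximum(n, 1), 0.0)
    return rng.normal(mean, 1.0 / np.sqrt(n + 1.0))

for i in range(K):                           # one observation per alternative
    n[i] += 1
    s[i] += true_means[i] + rng.normal()

for _ in range(2000):
    leader = int(np.argmax(posterior_draw()))
    arm = leader
    if rng.random() > beta:                  # with prob 1 - beta, find a challenger
        while arm == leader:
            arm = int(np.argmax(posterior_draw()))
    n[arm] += 1
    s[arm] += true_means[arm] + rng.normal()

print("selected best:", int(np.argmax(s / n)))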
pdf
Epsilon Optimal Sampling
Travis Goodwin (MITRE Corporation), Jie Xu (George Mason
University), Nurcin Celik (University of Miami), and
Chun-Hung Chen (George Mason University)
Abstract
Epsilon Optimal Sampling (EOS) is a novel
algorithm that seeks to reduce the computational
complexity of selecting the best design using
stochastic simulation. EOS is an Optimal
Computing Budget Allocation (OCBA) type
algorithm that reduces computational complexity
by integrating machine learning (ML) models into
the simulation optimization algorithm. EOS
avoids the pitfall of trading computational
overhead in simulation execution for
computational overhead in ML model training by
using a concept we call policy stability. In
this paper, we present the concept of policy
stability, how it can be used to improve dynamic
sampling techniques, and how low-fidelity ML
estimates can be integrated into the process.
Numerical results are presented to provide
evidence as to the improvement in computational
efficiency that can be achieved when using EOS
in conjunction with ML models over the standard
OCBA algorithm.
pdf
Adaptive Ranking and Selection Based Genetic
Algorithms for Data-driven Problems
Kimia Vahdat and Sara Shashaani (North Carolina State
University)
Abstract
We present ARGA (Adaptive Robust Genetic
Algorithm) to optimize zero-one simulation
problems by incorporating input uncertainty. In
ARGA, a surviving population of solutions
evolves as more information about the
high-dimensional problem affected by
stochasticity becomes available. A ranking and
selection operation in each iteration is
enhanced with a debiasing mechanism of fitness
values using fast iterated bootstraps and
control variates. Debiasing reduces the model
risk from input uncertainty bias, obtaining a
more accurate ranking of the current surviving
solutions. Given the double loop of function
evaluations, we adaptively increase the budget only
if the current population’s proximity to
optimality signals the need for a smaller
standard error. In that case, we allocate
additional replications to the input model of a
current surviving solution that is most
responsible for risk. The empirical results with
a fixed optimization budget demonstrate that
ARGA obtains significantly better solutions in a
feature selection problem on various datasets.
pdf
Technical Session · Simulation Optimization
Sampling in Optimization
Chair: Yunsoo Ha (North Carolina State University)
Parameter Optimization with Conscious Allocation
(POCA)
Joshua Inman, Tanmay Khandait, Giulia Pedrielli, and
Lalitha Sankar (Arizona State University)
Abstract
The performance of modern machine learning
algorithms depends upon the selection of a set
of hyperparameters. Common examples of
hyperparameters are learning rate and the number
of layers in a dense neural network. Auto-ML is
a branch of optimization that has produced
important contributions in this area. Within
Auto-ML, hyperband-based approaches, which
eliminate poorly-performing configurations after
evaluating them at low budgets, are among the
most effective. However, the performance of
these algorithms strongly depends on how
effectively they allocate the computational
budget to various hyperparameter configurations.
We present the new Parameter Optimization with
Conscious Allocation (POCA), a hyperband-based
algorithm that adaptively allocates the given
budget to the hyperparameter configurations it
generates following a Bayesian sampling scheme.
We compare POCA to its nearest competitor at
optimizing the hyperparameters of an artificial
toy function and a deep neural network and find
that POCA finds strong configurations faster in
both settings.
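Hyperband-based methods such as POCA build on
successive halving: evaluate many configurations
at a small budget, then repeatedly keep the best
fraction at a multiplied budget. The sketch below
shows only that underlying mechanism; POCA's
Bayesian sampling and budget allocation are not
reproduced, and the loss model is a made-up
stand-in:

import numpy as np

rng = np.random.default_rng(0)

def validation_loss(config, budget):
    # Stand-in for training a network for `budget` epochs (assumption):
    # noise shrinks as the budget grows.
    return config["lr_gap"] + rng.normal(scale=1.0 / np.sqrt(budget))

configs = [{"lr_gap": abs(np.log10(lr) + 2.5)}   # distance from a "good" rate
           for lr in 10 ** rng.uniform(-5, 0, size=27)]

budget, survivors = 1, configs
while len(survivors) > 1:                        # successive halving rounds
    losses = [validation_loss(c, budget) for c in survivors]
    keep = max(1, len(survivors) // 3)           # eta = 3 elimination
    survivors = [survivors[i] for i in np.argsort(losses)[:keep]]
    budget *= 3                                  # more budget per survivor

best = survivors[0]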
pdf
Cluster-based Sampling Allocation for Multi-fidelity
Simulation Optimization
Zirui Cao (National University of Singapore); Haowei
Wang (Rice-Rick Digitalization PTE. Ltd.); and Haobin
Li, Ek Peng Chew, and Kok Choon Tan (National University
of Singapore)
Abstract
Simulation optimization is widely used to
optimize complex systems. High-fidelity
simulation can be expensive, especially when the
number of designs is large. In practice, fast
but less accurate low-fidelity simulation is
often available and can provide valuable
information. In this paper, we propose a
sampling algorithm that utilizes information
from multiple fidelity simulation models to
improve the efficiency of searching for the best
design. A k-means algorithm is introduced to
help capture the performance clustering
phenomenon among designs, and a cluster validity
index is proposed to determine the optimal
number of clusters. The proposed sampling
algorithm can incorporate the information of
performance clusters and approximately minimize
the expected opportunity cost of the selected
best design. Numerical results substantiate the
superior performance of the proposed algorithm.
pdf
Dynamic Stratification and Post-stratified Adaptive
Sampling for Simulation Optimization
Pranav Jain and Sara Shashaani (North Carolina State
University)
Abstract
Post-stratification is a variance reduction
technique that groups samples in respective
strata only after collecting the samples
randomly. We incorporate this technique within
an adaptive sampling procedure in simulation
optimization. We use concomitant variables to
increase the accuracy of our proposed
post-stratified adaptive sampling. Concomitant
variables are auxiliary variables in simulation
that approximate the boundaries of the optimal
strata at each visited solution during the
optimization procedure. A linear relationship
between the concomitant variable and the output
is desirable but not necessary for the
effectiveness of the proposed methodology. In
numerical experiments, we observe that
performing post-stratified adaptive sampling
with dynamically updated strata boundaries
robustifies the algorithm in the sense that it
reduces the algorithm's sensitivity to the
initial solution and solver input parameters.
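The post-stratified estimator at the heart of the
procedure can be illustrated in a few lines:
collect samples randomly, group them into strata
defined by a concomitant variable after the fact,
and weight each stratum mean by its known
probability. The quartile strata and toy output
model below are assumptions for illustration:

import numpy as np

rng = np.random.default_rng(0)
m = 200
w = rng.uniform(size=m)                        # concomitant variable
y = 2.0 * w + rng.normal(scale=0.3, size=m)    # simulation output (toy model)

edges = np.quantile(w, [0.0, 0.25, 0.5, 0.75, 1.0])  # strata set after sampling
strata = np.clip(np.searchsorted(edges, w, side="right") - 1, 0, 3)

# Each stratum has known probability 1/4 because the strata are
# quartiles of the concomitant variable.
post_strat_mean = sum(0.25 * y[strata == k].mean() for k in range(4))
naive_mean = y.mean()                          # same data, higher variance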
pdf
Technical Session · Simulation Optimization
Gaussian Process Surrogates
Chair: Zirui Cao (National University of Singapore)
Simulation Optimization with Multiple Attempts
Jingjun Men and Zhihao Liu (Southern University of
Science and Technology), Haowei Wang (Rice-Rick
Digitalization PTE. Ltd.), and Songhao Wang (Southern
University of Science and Technology)
Abstract
Simulation optimization is a widely utilized
approach that allows decision-makers to test
various decision variable settings in simulators
before implementing a final recommended action
on the real systems. In some real-world
scenarios, the recommended action can be
executed multiple times and the performance is
evaluated as the best one among these multiple
attempts. In this paper, we introduce this
simulation optimization problem with multiple
attempts and provide insights into the problem
through comparison to a risk-averse decision
making problem. We propose a surrogate-assisted
algorithm based on the Gaussian process model
and the upper confidence bound criterion for
efficiently solving such problems. We
demonstrate the efficiency and effectiveness of
the proposed approach with several numerical
examples.
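A hedged sketch of the surrogate-assisted idea:
fit a Gaussian process, then score candidates by
a Monte Carlo estimate of the expected best of m
noisy attempts under the posterior, plus an
exploration bonus as a rough stand-in for the
paper's upper confidence bound criterion. The
simulator, kernel, and m below are illustrative
assumptions:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
m_attempts, noise_sd = 3, 0.2                  # best of three tries (assumed)

def simulate(x):                               # toy noisy simulator
    return float(np.sin(3 * x) + noise_sd * rng.normal())

X = rng.uniform(0, 2, size=(5, 1))
y = np.array([simulate(float(x[0])) for x in X])
grid = np.linspace(0, 2, 201).reshape(-1, 1)

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=noise_sd**2).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    f = rng.normal(mu, sd, size=(256, len(grid)))       # posterior draws of f
    tries = f[:, None, :] + noise_sd * rng.standard_normal((256, m_attempts, len(grid)))
    acq = tries.max(axis=1).mean(axis=0) + 0.5 * sd     # E[best of m] + bonus
    x_next = grid[int(np.argmax(acq))]
    X = np.vstack([X, x_next.reshape(1, 1)])
    y = np.append(y, simulate(float(x_next[0])))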
pdf
Hyperparameter Adaptive Search for Surrogate
Optimization: A Self-Adjusting Approach
Nazanin Nezami and Hadis Anahideh (University of
Illinois Chicago)
Abstract
Surrogate Optimization (SO) algorithms have
shown promise for optimizing expensive black-box
functions. However, their performance is heavily
influenced by hyperparameters related to
sampling and surrogate fitting, which poses a
challenge to their widespread adoption. We
investigate the impact of hyperparameters on
various SO algorithms and propose a
Hyperparameter Adaptive Search for SO (HASSO)
approach. HASSO is not a hyperparameter tuning
algorithm, but a generic self-adjusting SO
algorithm that dynamically tunes its own
hyperparameters while concurrently optimizing
the primary objective function, without
requiring additional evaluations. The aim is to
improve the accessibility, effectiveness, and
convergence speed of SO algorithms for
practitioners. Our approach identifies and
modifies the most influential hyperparameters
specific to each problem and SO approach,
reducing the need for manual tuning without
significantly increasing the computational
burden. Experimental results demonstrate the
effectiveness of HASSO in enhancing the
performance of various SO algorithms across
different global optimization test problems.
pdf
Approximate Gaussian Process Regression with Pairwise
Comparison Data
Efe Sertkaya and Ilya Ryzhov (University of Maryland)
Abstract
We use approximate Bayesian inference, together
with Gaussian process regression, to create a
new estimator for an unknown function in a
situation where we can only observe pairwise
comparisons of function values at different
inputs. Preliminary experimental results suggest
that, although information is heavily censored
in this setting, it may still be possible to
learn the local and global minima of the
underlying function. We discuss possible
sampling criteria, and explore the performance
of the "probability of improvement" strategy
numerically.
pdf
Technical Session · Simulation Optimization
Continuous Optimization
Chair: Meichen Song (Stony Brook University)
Towards Greener Stochastic Derivative-Free
Optimization with Trust Regions and Adaptive
Sampling
Yunsoo Ha and Sara Shashaani (North Carolina State
University)
Abstract
Adaptive sampling-based trust-region
optimization has emerged as an efficient solver
for nonlinear and nonconvex problems in noisy
derivative-free environments. This class of
algorithms proceeds by iteratively constructing
local models on objective function estimates
that use a carefully chosen number of calls to
the stochastic oracle. In this paper, we
introduce a refined version of this class of
algorithms that reuses the information from
previous iterations. The advantage of this
approach is reducing computational burden
without sacrificing consistency or work
complexity to attain the same level of
optimality, which we demonstrate through
numerical results using the SimOpt library.
pdf
Stochastic Adaptive Regularization Method with
Cubics: A High Probability Complexity Bound
Katya Scheinberg and Miaolan Xie (Cornell University)
Abstract
We present a high probability complexity bound
for a stochastic adaptive regularization method
with cubics, also known as regularized Newton
method. The method makes use of stochastic
zeroth-, first- and second-order oracles that
satisfy certain accuracy and reliability
assumptions. Such oracles have been used in the
literature by other stochastic adaptive methods,
such as trust region and line search. These
oracles capture many settings, such as expected
risk minimization, stochastic zeroth-order
optimization, and others. In this paper, we give
the first high probability iteration bound for
stochastic cubic regularization, and show that
just as in the deterministic case, it is
superior to other stochastic adaptive methods.
pdf
A Projection-Based Algorithm for Solving Stochastic
Inverse Variational Inequality Problems
Zeinab Alizadeh, Felipe Parra Polanco, and Afrooz
Jalilzadeh (The University of Arizona)
Abstract
We consider a stochastic Inverse Variational
Inequality (IVI) problem defined by a continuous
and co-coercive map over a closed and convex
set. Motivated by the absence of performance
guarantees for stochastic IVI, we present a
variance-reduced projection-based gradient
method. Our proposed method ensures an almost
sure convergence of the generated iterates to
the solution, and we establish a convergence
rate guarantee. To verify our results, we apply
the proposed algorithm to a network equilibrium
control problem.
pdf
Technical Session · Simulation Optimization
Learning for Optimization
Chair: Peter J Haas (University of Massachusetts
Amherst)
Efficient Hybrid Simulation Optimization via Graph
Neural Network Metamodeling
Wang Cen and Peter Haas (University of Massachusetts
Amherst)
Abstract
Simulation metamodeling is essential for
speeding up optimization via simulation to
support rapid decision making. During
optimization, the metamodel, rather than
expensive simulation, is used to compute
objective values. We recently developed
graphical neural metamodels (GMMs) that use
graph neural networks to allow the graphical
structure of a simulation model to be treated as
a metamodel input parameter that can be varied
along with scalar inputs. In this paper we
provide novel methods for using GMMs to solve
hybrid optimization problems where both
real-valued input parameters and graphical
structure are jointly optimized. The key ideas
are to modify Monte Carlo tree search to
incorporate both discrete and continuous
optimization and to leverage the automatic
differentiation infrastructure used for neural
network training to quickly compute gradients of
the objective function during stochastic
gradient descent. Experiments on stochastic
activity network and warehouse models
demonstrate the potential of our method.
pdf
Policy-Augmented Bayesian Network Optimization with
Global Convergence
Junkai Zhao (Shanghai Jiao Tong University), Wei Xie
(Northeastern University), and Jun Luo (Shanghai Jiao
Tong University)
Abstract
Driven by critical challenges in
biomanufacturing, including high complexity and
high uncertainty, we propose global optimization
methods on the policy-augmented Bayesian network
(PABN), characterizing risk- and science-based
understanding of underlying bioprocess
mechanisms, to guide optimal control. We
first develop a sequential optimization
algorithm based on deep kernel learning (DKL)
for PABN with general state transition dynamics,
which can learn the spatial dependence of mean
response through a deep neural network. In
addition, to improve the interpretability and
computational efficiency of policy optimization,
a global metamodel is introduced to guide linear
Gaussian PABN optimization, which explicitly
accounts for the correlation of input-to-output
pathways obtained under different candidate
policies. Our empirical study provides the
ablation analysis and the interpretation
analysis of the DKL, and also shows that both
proposed approaches demonstrate promising
performance compared to the standard Bayesian
optimization with Gaussian process.
pdf
Simultaneous Perturbation-Based Stochastic
Approximation for Quantile Optimization
Best Contributed Theoretical Paper - Finalist
Meichen Song and Jiaqiao Hu (Stony Brook University) and
Michael C. Fu (University of Maryland, College Park)
Abstract
We study a gradient-based algorithm for solving
differentiable quantile optimization problems
under a black-box scenario. The algorithm finds
improved solutions along the descent direction
of the quantile objective function, which is
approximated at each step using a simultaneous
perturbation technique that involves the
difference quotient of the output random
variables. Compared to existing quantile
optimization methods, our algorithm has a
two-timescale stochastic approximation structure
and uses only three observations of the output
random variable per iteration without requiring
knowledge of the underlying system model. We
show the local convergence of the algorithm and
establish a finite-time bound on the convergence
rate of the algorithm. Numerical results are
also presented to illustrate the algorithm.
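The two-timescale structure can be sketched as
follows: a fast recursion tracks the quantiles at
the two perturbed points, while a slower
SPSA-style recursion updates the decision
variable using their difference quotient. This
simplification uses two observations per
iteration rather than the paper's three, and the
step-size choices are illustrative:

import numpy as np

rng = np.random.default_rng(0)
alpha_q = 0.9                        # target quantile level

def sample_output(theta):            # black-box noisy output (toy model)
    return float(np.sum(theta ** 2) + rng.normal())

theta = np.array([2.0, -1.5])
q_plus = q_minus = 0.0               # running quantile trackers (fast timescale)
for k in range(1, 20001):
    a_k, b_k, c_k = 0.5 / k, 1.0 / k**0.6, 1.0 / k**0.2
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
    y_plus = sample_output(theta + c_k * delta)
    y_minus = sample_output(theta - c_k * delta)
    # Fast timescale: recursive estimates of the two quantiles.
    q_plus += b_k * (alpha_q - (y_plus <= q_plus))
    q_minus += b_k * (alpha_q - (y_minus <= q_minus))
    # Slow timescale: descent along the quantile difference quotient;
    # dividing by a Rademacher delta equals multiplying by it.
    theta -= a_k * (q_plus - q_minus) / (2 * c_k) * delta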
pdf
Technical Session · Simulation Optimization
Performance Indicators and Matrix Approximation
Chair: Sara Shashaani (North Carolina State University)
Properties of Several Performance Indicators for
Global Multi-Objective Simulation Optimization
Susan R. Hunter and Burla E. Ondes (Purdue University)
Abstract
We discuss the challenges in constructing and
analyzing performance indicators for
multi-objective simulation optimization (MOSO),
and we examine properties of several performance
indicators for assessing algorithms designed to
solve MOSO problems to global optimality. Our
main contribution lies in the definition and
analysis of a modified coverage error; the
modification to the coverage error enables us to
obtain an upper bound that is the sum of
deterministic and stochastic error terms. Then,
we analyze each error term separately to obtain
an overall upper bound on the modified coverage
error that is a function of the dispersion of
the visited points in the compact feasible set
and the sampling error of the objective function
values at the visited points. The upper bound
provides a foundation for future mathematical
analyses that characterize the rate of decay of
the modified coverage error.
pdf
Stochastic Constraints: How Feasible is
Feasible?
David Eckman (Texas A&M University), Shane Henderson
(Cornell University), and Sara Shashaani (North Carolina
State University)
Abstract
Stochastic constraints, which constrain an
expectation in the context of simulation
optimization, can be hard to conceptualize and
harder still to assess. As with a deterministic
constraint, a solution is considered either
feasible or infeasible with respect to a
stochastic constraint. This perspective belies
the subjective nature of stochastic constraints,
which often arise when attempting to avoid
alternative optimization formulations with
multiple objectives or an aggregate objective
with weights. Moreover, a solution's feasibility
with respect to a stochastic constraint cannot,
in general, be ascertained based on only a
finite number of simulation replications. We
introduce different means of estimating how
"close" the expected performance of a given
solution is to being feasible with respect to
one or more stochastic constraints. We explore
how these metrics and their bootstrapped error
estimates can be incorporated into plots showing
a solver's progress over time when solving a
stochastically constrained problem.
pdf
Column Subset Selection and Nyström
Approximation via Continuous Optimization
Anant Mathur, Sarat Moka, and Zdravko Botev (UNSW)
Abstract
We propose a continuous optimization algorithm
for the Column Subset Selection Problem (CSSP)
and Nyström approximation. The CSSP and
Nyström method construct low-rank
approximations of matrices based on a
predetermined subset of columns. It is well
known that choosing the best column subset of
size k is a difficult combinatorial problem. In
this work, we show how one can approximate the
optimal solution by defining a penalized
continuous loss function that is minimized via
stochastic gradient descent. We show that the
gradients of this loss function can be estimated
efficiently using matrix-vector products with a
data matrix X in the case of the CSSP or a
kernel matrix K in the case of the Nyström
approximation. We provide numerical results for
a number of real datasets showing that this
continuous optimization is competitive against
existing methods.
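For reference, the Nyström construction whose
column subset is being optimized: select columns
to form C and the principal submatrix W, and
approximate K by C W^+ C^T. The sketch below uses
a random subset where the paper's continuous
optimization would choose one:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf_kernel(X, X)
idx = rng.choice(len(X), size=20, replace=False)  # column subset (random here)
C = K[:, idx]                                     # selected columns
W = K[np.ix_(idx, idx)]                           # principal submatrix
K_hat = C @ np.linalg.pinv(W) @ C.T               # Nystrom approximation

err = np.linalg.norm(K - K_hat, "fro") / np.linalg.norm(K, "fro")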
pdf
Technical Session · Simulation Optimization
Queueing Systems and Experiment Design
Chair: David J. Eckman (Texas A&M University)
Sequential Simulation Optimization with Censoring: An
Application to Bike Sharing Systems
Cedric Gibbons (Chilean Navy), James Grant (Lancaster
University), and Roberto Szechtman (Naval Postgraduate
School)
Abstract
Sequential Simulation Optimization is an online
optimization framework where an operator
iterates periodically between collecting data
from a real-world system, using stochastic
simulation to approximate the optimal values of
some operational variables, and setting some
choice of variables in the system for the next
period. The aim is to converge to an optimum
efficiently, as uncertainty due to finite data
and finitely many simulations eventually
reduces. Using Bike Sharing Systems (BSS) as a
motivating example, we analyze a variant where
data from the real-world system is subject to
censoring, whose nature depends on the system
variables selected by the operator. In the BSS
setting, censoring is of customer demand or of
slots in which to drop bikes off. We show
that a method built upon Sample Average
Approximation attains asymptotically vanishing
error in its parameter estimates and
specification of the optimal operational
variables.
pdf
SF-SFD: Stochastic Optimization of Fourier
Coefficients to Generate Space-Filling Designs
Manisha Garg (University of Illinois Urbana-Champaign,
Argonne National Laboratory) and Tyler H. Chang and
Krishnan Raghavan (Argonne National Laboratory)
Abstract
Due to the curse of dimensionality, it is often
prohibitively expensive to generate
deterministic space-filling designs. On the
other hand, when using naive uniform random
sampling to generate designs cheaply, design
points tend to concentrate in a small region of
the design space. Although it is often
preferable to utilize quasi-random techniques
such as Sobol sequences and Latin hypercube
designs over uniform random sampling, these
methods have their own caveats, especially in
high-dimensional spaces. In this
paper, we propose a technique that addresses the
fundamental issue of measure concentration by
updating high-dimensional distribution functions
to produce better space-filling designs. Then,
we show that our technique can outperform Latin
hypercube sampling and Sobol sequences in terms
of the discrepancy metric while generating
moderately-sized space-filling samples for
high-dimensional problems.
pdf
Uncertainty Quantification and Robust Simulation
Track Coordinator - Uncertainty Quantification and Robust
Simulation: Xi Chen (Virginia Tech), Wei Xie (Northeastern
University)
Technical Session · Uncertainty Quantification and Robust Simulation
Optimization under Input Uncertainty and Model Calibration
Chair: Guangwu Liu (City University of Hong Kong)
Upper-Confidence-Bound Procedure for Robust Selection
of the Best
Yuchen Wan (Fudan University); Weiwei Fan (Tongji
University); and L. Jeff Hong (Fudan University, School
of Management)
Abstract
Robust selection of the best (RSB) is an
important problem in the simulation area, when
there exists input uncertainty in the underlying
simulation model. RSB models this input
uncertainty by a discrete ambiguity set and then
proposes a two-layer framework under which the
best alternative is defined to have the best
worst-case mean performance over the ambiguity
set. In this paper, we adopt a fixed-budget
framework to address the RSB problem.
Specifically, in contrast with existing
procedures, we develop a new robust
upper-confidence-bound (UCB) procedure, named
R-UCB. We show that the R-UCB procedure
successfully inherits the simplicity and
convergence guarantee of the traditional UCB
procedure. Furthermore, simulation experiments
demonstrate that the R-UCB procedure numerically
outperforms the existing RSB procedures.
pdf
Input Data Collection versus Simulation: Simultaneous
Resource Allocation
Yuhao Wang and Enlu Zhou (Georgia Institute of
Technology)
Abstract
This paper investigates the problem of ranking
and selection under input uncertainty with
simultaneous resource allocation. In this
problem, two types of resources are sequentially
allocated at the same time to collect input data
to reduce input uncertainty and run simulations
to reduce stochastic uncertainty. We formulate
the simultaneous resource allocation problem as
a concave optimization problem that aims to
maximize the asymptotic probability of correct
selection (PCS) through the allocation policy
for both input data collection and simulation,
based on a moving-average estimator for
aggregation of simulation outputs and its
asymptotic normality. The two optimal policies
are interdependent since they jointly affect the
PCS. We derive the optimality equations to
characterize the optimal policies and develop a
fully sequential algorithm that demonstrates
high efficiency through numerical experiments.
pdf
Representative Calibration Using Black-box
Optimization and Clustering
Serin Lee, Pariyakorn Maneekul, and Zelda B. Zabinsky
(University of Washington)
Abstract
Calibration is a crucial step for model
validity, yet its representation is often
disregarded. This paper proposes a two-stage
approach to calibrate a model that represents
target data by identifying multiple diverse
parameter sets while remaining computationally
efficient. The first stage employs a black-box
optimization algorithm to generate near-optimal
parameter sets; the second stage clusters the generated parameter sets. Five black-box
optimization algorithms, namely, Latin Hypercube
Sampling (LHS), Sequential Model-based Algorithm
Configuration (SMAC), Optuna, Simulated
Annealing (SA), and Genetic Algorithm (GA), are
tested and compared using a disease-opinion
compartmental model with predicted health
outcomes. Results show that LHS and Optuna allow
more exploration and capture more variety in
possible future health outcomes. SMAC, SA, and
GA are better at finding the best parameter set
but their sampling approach generates less
diverse model outcomes. This two-stage approach
can reduce computation time while producing
robust and representative calibration.
pdf
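A minimal sketch of the two-stage idea, with plain random search standing in for the black-box optimizers (LHS, SMAC, Optuna, SA, GA) and scikit-learn's k-means as the clustering stage; the loss function is a hypothetical stand-in for the disease-opinion model.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

def loss(theta):
    # Hypothetical calibration loss: misfit of model output to target data.
    return np.sum((np.sin(theta) - 0.5) ** 2)

# Stage 1: generate candidates and keep the near-optimal parameter sets.
candidates = rng.uniform(-3.0, 3.0, size=(5000, 4))
losses = np.array([loss(th) for th in candidates])
near_optimal = candidates[losses <= np.quantile(losses, 0.02)]

# Stage 2: cluster the near-optimal sets; centroids give diverse,
# representative calibrated parameter sets.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(near_optimal)
print(km.cluster_centers_)
```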
Technical Session · Uncertainty Quantification and Robust Simulation
Uncertainty Quantification
Chair: Hong Wan (North Carolina State University)
Resampling Stochastic Gradient Descent Cheaply
Henry Lam and Zitong Wang (Columbia University)
Abstract
Stochastic gradient descent (SGD) or stochastic
approximation has been widely used in model
training and stochastic optimization. While
there is a huge literature on analyzing its
convergence, inference on the obtained solutions
from SGD has only been recently studied, yet is
important due to the growing need for
uncertainty quantification. We investigate two
easily implementable resampling-based methods to
construct confidence intervals for SGD
solutions. One uses multiple, but few, SGDs in
parallel via resampling with replacement from
the data, and another operates this in an online
fashion. Our methods can be regarded as
enhancements of established bootstrap schemes to
substantially reduce the computation effort in
terms of resampling requirements, while at the same time bypassing the intricate mixing
conditions in existing batching methods. We
achieve these via a recent cheap bootstrap idea
and a Berry-Esseen-type bound for SGD.
pdf
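The cheap-bootstrap interval referenced here can be sketched for a toy scalar problem: run a handful of SGD replications on resampled data and form a t-interval with B degrees of freedom. This is a simplified illustration under invented settings, not the authors' implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.normal(2.0, 1.0, 1000)

def sgd(sample, steps=2000, lr=0.05):
    # Minimize E[0.5 * (theta - X)^2]; the solution is the mean of X.
    theta = 0.0
    for t in range(steps):
        x = sample[rng.integers(len(sample))]
        theta -= lr / (1 + t) ** 0.6 * (theta - x)
    return theta

theta_hat = sgd(data)
B, alpha = 5, 0.05               # only a few resampled SGD replications
reps = np.array([sgd(rng.choice(data, len(data), replace=True))
                 for _ in range(B)])
s = np.sqrt(np.mean((reps - theta_hat) ** 2))
half = stats.t.ppf(1 - alpha / 2, df=B) * s   # t-interval with B d.o.f.
print(f"95% CI: [{theta_hat - half:.3f}, {theta_hat + half:.3f}]")
```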
Input Uncertainty Quantification Via Simulation
Bootstrapping
Manjing Zhang (Guangdong Laboratory of Artificial
Intelligence and Digital Economy (SZ)), Guangwu Liu
(City University of Hong Kong), Shan Dai (Shenzhen
Research Institute of Big Data), and Yulin He (Guangdong
Laboratory of Artificial Intelligence and Digital
Economy (SZ))
Abstract
Input uncertainty, which refers to the output
variability arising from statistical noise in
specifying the input models, has been
intensively studied recently. Ignoring input
uncertainty often leads to poor estimates of
system performance. In the non-parametric
setting, input uncertainty is commonly estimated
via bootstrap, but the performance of
traditional bootstrap resampling is compromised
when input uncertainty is also associated with
simulation uncertainty. Nested simulation is
studied to improve the performance by taking
variance estimation into account, but suffers
from a substantial burden on required simulation
effort. To tackle the above problems, this paper
introduces a non-nested method to build
asymptotically valid confidence intervals for
input uncertainty quantification. The
convergence properties are studied, which
establish statistical guarantees for the
proposed estimators related to real-data size
and bootstrap budget. An easy-to-implement
algorithm is also provided. Numerical examples
show that the estimated confidence intervals
perform satisfactorily under given confidence
levels.
pdf
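For contrast with the paper's non-nested method, a basic percentile-bootstrap treatment of input uncertainty looks roughly like this; the simulate function is a hypothetical stand-in for a real simulation model.

```python
import numpy as np

rng = np.random.default_rng(5)
arrivals = rng.exponential(1.0, 200)   # finite sample of real-world input data

def simulate(mean_interarrival, nrep=100):
    # Hypothetical stand-in: noisy simulation output driven by the input model.
    return rng.normal(1.0 / mean_interarrival, 0.1, nrep).mean()

boot = []
for _ in range(500):
    resample = rng.choice(arrivals, len(arrivals), replace=True)
    boot.append(simulate(resample.mean()))   # refit input model, rerun model
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% percentile CI: [{lo:.3f}, {hi:.3f}]")
```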
Asymptotic Normality of Joint Metamodel-Based Sobol'
Index Estimators
Jingtao Zhang, Xi Chen, and Ruochen Wang (Virginia Tech)
Abstract
This paper proposes two joint metamodel-based
Sobol' index estimators and investigates their
asymptotic properties. The numerical evaluation
corroborates the theoretical results and
highlights the impact of the combination of
training sample size and Monte Carlo sample size
on the estimators' performance.
pdf
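As background, a plain Monte Carlo pick-freeze estimator of first-order Sobol' indices (the quantity the paper's metamodel-based estimators target) can be sketched as follows; the test function is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 3, 100_000

def f(x):                        # arbitrary additive test function
    return x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 2]

A, B = rng.random((n, d)), rng.random((n, d))
fA, fB = f(A), f(B)
var = fA.var()
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # replace only coordinate i ("pick-freeze")
    Si = np.mean(fB * (f(ABi) - fA)) / var   # Saltelli-style estimator of S_i
    print(f"S_{i} ~ {Si:.3f}")
```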
Technical Session · Uncertainty Quantification and Robust Simulation
Input Modeling and Optimization via Machine Learning
Chair: Jingtao Zhang (Virginia Tech)
An Intelligent Framework to Maximize Individual
Driver Income
Fang Chen and Hua Cai (Purdue University) and Hong Wan
(North Carolina State University)
Abstract
Ridesharing platforms have significantly changed how taxis operate in recent years. Most
previous works focus on improving the user
experience and maximizing the revenue from the
platform or system level. Benefits to individual drivers are rarely addressed. In this work, we
propose a deep reinforcement learning-based
framework to help the individual driver maximize
their daily income via order selections and
self-repositioning. We first formulated the taxi
operation as a Markov Decision Process. Then we
created a multi-agent simulation consisting of
the taxi drivers that use different strategies.
A deep Q network-based (DQN) framework is
proposed for drivers to learn which orders to
select and where to reposition. Our result shows
the driver who adopts the DQN framework
outperforms all other drivers. Furthermore, we
also found that the optimal policy does not
steer the driver toward particular areas but recommends selecting orders with fares between $5 and $7.50.
pdf
Virtual Wearable Sensor Data Generation with
Generative Adversarial Networks
Yining Huang and Hong Wan (North Carolina State
University) and Xi Chen (Virginia Tech)
Abstract
This study delves into the utilization of
Generative Adversarial Networks (GANs) for
generating subject-specific time series sensor
data, offering an innovative alternative to
traditional metamodel-based simulations. We
undertake an in-depth analysis of DoppelGANger,
a prominent GAN variant for time series data and
metadata generation, evaluating its efficiency
and efficacy. The sensor data for this
investigation was sourced from the National
Health and Nutrition Examination Survey, which
served as the foundational training set. We
scrutinized the synthesized sensor data
corresponding to various physical attributes,
focusing on the temporal and multi-dimensional
statistical properties. Our empirical findings
underscore the potential of GANs to adeptly
capture the time-dependent correlations and the
intricate statistical characteristics inherent
in multi-dimensional data. This insight into
GANs' capabilities is a crucial step towards
more sophisticated synthetic data generation,
with significant implications for future
applications in wearable technology and
personalized health monitoring systems.
pdf
Track Coordinator - Vendor: Aristotelis Thanos (University of Miami), Edward Williams
(PMC)
Vendor Session · Vendor
Simulation Software for Manufacturing
Chair: Nurcin Celik (University of Miami)
Introducing Mozart Fab Wise: a Cloud-based Simulation
Solution for Semiconductor Fabs
Keyhoon Ko (VMS Global, Inc.)
Abstract
In response to the intricate planning and
scheduling challenges encountered in the
semiconductor industry, VMS leverages its
extensive 20 years of experience to introduce MOZART
Fab WISE, a dedicated cloud-based simulation
solution. Fab WISE offers an array of data
interfaces, enabling the generation of
comprehensive data and rich analytical reports.
Customers have the flexibility to customize the
level of modeling detail based on their specific
objectives, with the capacity to conduct both
short-term and long-term simulations. Remarkably
adaptable, Fab WISE can function as a blueprint
for capacity planning (CP), factory planning
(FP), and real-time scheduling (RTS), making it
a versatile solution tailored to
customer-specific requirements.
Chiaha Discrete Rate Simulation
Andrew Siprelle (Chiaha.ai)
Abstract
Discrete Rate Simulation (DRS) has been a key
enabling technology used to address canonical
problems in high-speed manufacturing. In this
talk, we review the history of DRS from its
creation 25 years ago, to our revolutionary new
DRS engine and associated tools. Let Chiaha help
you accelerate your "raw data to prediction"
journey!
Vendor Session · Vendor
Innovative Simulation Tools
Chair: Bahar Biller (SAS Institute, Inc)
Three Recent Advances in Simio: Auto-create, Advanced
Traffic Control, and DDMRP
Jeffrey Smith and David Sturrock (Simio LLC)
Abstract
This talk discusses and demonstrates three
recent advances in Simio. The first is
Simio’s Data Driven/Data Generated
modeling approach using Simio custom objects,
data tables, and the AutoCreateInstance and
Create Element methods. While the objects in the
Simio Standard Library are very flexible, custom
objects can take your models to the next level.
Furthermore, the “Create Object From
This” and “Update Property Defaults
From This” functions make the creation and
maintenance of custom objects extremely easy.
The second topic is Simio’s advanced
traffic control features which significantly
simplify deadlock prevention and path planning
for systems with bi-directional links. Finally,
the third topic is Simio’s new DDMRP
(Demand-driven Materials Requirement Planning)
tools. These features include the DDMRP
replenishment method as part of the existing
Inventory Element, DDMRP specific calculators
with associated data table schema/templates for
inputs and outputs, and DDMRP Specific
Dashboards.
Enterprise Resource Simulator: Simulating Without
Limits
Michel Hoffmeijer (InControl Enterprise Dynamics) and
Fred Jansma (InControl Enterprise Dynamics)
Abstract
Enterprise Resource Simulator (ERS) is a
simulation platform that focuses on speed and
versatility. It allows for models that are very
large while still offering good performance. ERS
does this by utilizing the full capabilities of
modern computers in terms of efficient and
scalable multi-threading. In addition to pure
scale, ERS allows the models to have more depth
and complexity by allowing multiple different
formalisms in the same model. In addition to the
features of the models, ERS is built to support
multiple programming languages and to allow a
user or a developer to build a full application
upon it. This means that ERS can fulfill all
simulation needs.
Vendor Session · Vendor
Integrating AI and Simulation
Chair: John Shortle (George Mason University)
SmartFactory AI Productivity Utilizing
Simulation
Samantha Duchscherer (Applied Materials)
Abstract
Accurately simulating a semiconductor
environment is challenging. Tools and processing
steps are constantly evolving due to
advancements in technology nodes and other
unforeseen manufacturing modifications. However,
AutoSched has out-of-the-box capabilities to
accurately simulate a particular tooling area as
well as an entire facility. Models are also
customizable to handle robust scenarios ranging
from modifying how routes are built to varying
the number of bottleneck stations. This
flexibility makes AutoSched a key component in
the data preparation phase for deploying various
AI use cases. Here we will demonstrate the capabilities of AutoSched in modeling key factors
inherent to semiconductor manufacturing and
showcase how this enables AI innovations and
real operational efficiency gains. From
predicting lot cycle time with a gradient
boosting model to utilizing reinforcement
learning for optimizing dispatching parameter
values and scheduling constraints, simulation is
empowering SmartFactory AI Productivity.
Data Driven Digital Twin – Benefits and
Advantages in Real-time Systems
Hosni Adra (CreateASoft, Inc)
Abstract
The term "digital twin" is akin to a chameleon
in the industry, adopting various meanings and
causing widespread confusion. In this
presentation, we embark on a mission to
demystify digital twins, categorize their
diverse implementations, explore the realm of
simulations, and unveil the myriad uses of this
transformative technology. We explore the
differences and benefits of each type with
special emphasis on data-driven digital twins
and their integration with AI (Artificial
Intelligence), ML (Machine Learning) and DL
(Deep Learning) technologies.
www.createasoft.com
Vendor Session · Vendor
Implementing Simulation Projects
Chair: John Shortle (George Mason University)
Overcoming Real-world Challenges on Simulation
Projects
Saurabh Parakh, Amy Greer, and Yusuke Legard (MOSIMTEC,
LLC)
Abstract
MOSIMTEC expertly guides clients – from
pharma to farming, from climate change to change
management – through simulation modeling
so they get the MOST knowledge, the MOST
insight, and the MOST intelligent answers to
Future Proof their Business. At this vendor
track presentation, MOSIMTEC consultants will be
sharing stories from implementing commercial
simulation projects, along with tips for
addressing real world challenges related to
project management, stakeholder buy-in, project
deadlines, and data scarcity.
Poster · Poster
Poster Track Lightning Presentations
Chair: Zeyu Zheng (University of California, Berkeley);
María Julia Blas (INGAR CONICET UTN)
Using Narratives to Facilitate Public Acceptance of
Policies through Agent-Based Simulations
Yusuke Goto (Shibaura Institute of Technology)
Abstract
In this paper, we introduce a conceptual
framework of policy communication that is
propelled by narratives generated via
agent-based simulations. The framework
demonstrates that public acceptance of policies
is contingent upon the interplay between
generated narratives and the stakeholders who
receive them. Moreover, it illustrates a model
that employs narratives to facilitate public
acceptance of policies through agent-based
simulations. Drawing on the proposed framework,
we identify the following three challenges
encountered in policy communication that is
driven by narratives generated through
agent-based simulations: developing a
methodology of narrative design and
visualization, identifying factors that
influence public acceptance of policies, and
providing the assurance of accountability as
justified narratives.
pdf
Digital Twin Readiness Assessment: Case Study at a
Printing Company
Jānis Grabis (Riga Technical University)
Abstract
Digital twins provide a way to control various
manufacturing processes. To justify their
implementation investment, a systematic
readiness assessment is conducted at a printing
company. The assessment highlights readiness
gaps and provides a basis for further
implementation of digital twin technology. Three
implementation scenarios are elaborated and
evaluated jointly with the company’s
representatives. A digital twin solution for
optimization of the folding process to improve
delivery time estimation is selected for further
implementation.
pdf
Constructing an ABM to Enhance Residents' Conviction
Regarding the Effectiveness of Town Development
Measures
Ibu Ueno and Shingo Takahashi (Waseda University)
Abstract
Social simulations have been employed to evaluate town development measures. In recent years, it has become essential to
involve diverse stakeholders in the modeling
process and feedback of simulation results. This
paper aims to construct a method using Gaming
Simulation (GS) to allow participants to
experience an Agent-Based Model (ABM),
comprehend the model, and gain conviction from the simulation results.
pdf
Integrated Modeling and Optimization of Spare Part
Logistic Operations and Condition-based Maintenance
Policies in a System of Geographically Distributed
Assets
Po-Han Wang and Dragan Djurdjanovic (The University of
Texas at Austin)
Abstract
This study presents joint optimization of Spare
Parts Logistics (SPL) operations with
condition-based maintenance (CBM) policies in a
system of geographically distributed assets,
each consisting of multiple degrading parts. The
model considers facility location selection,
network connectivity design, inventory levels
for replenishment triggering, and CBM policies
that minimize overall system operating costs.
The solution is implemented as a sequential
model consisting of two stages: the initial
stage utilizes mathematical programming for
facility location selection and network design.
It is followed by a simulation-based method
using a Continuous Time Markov Chain to model
degradation of spare parts and link it with
inventory management. Additionally, the
maintenance operations model includes
opportunistic maintenance, which enables further
reduction of operating costs. Overall, the newly
proposed approach addresses scale limitations
and overly restrictive simplifications of
previously published models, enabling more comprehensive operational decision-making.
pdf
Potential Impact of a Diagnostic Test for Detecting
Prepatent Guinea Worm Infections in Dogs
Hannah Smalley and Pinar Keskinocak (Georgia Institute
of Technology); Julie Swann (North Carolina State
University); Christopher Hanna (Global Project Partners,
LLC); and Adam Weiss (The Carter Center)
Abstract
Chad has seen a considerable reduction in cases
of Guinea worm disease (or dracunculiasis) in
domestic dogs in recent years, but accelerating
elimination of the disease may require
additional tools. We investigate the potential
benefits of a hypothetical diagnostic test
capable of detecting pre-patent infections in
dogs. We adapted an agent-based simulation model
for analyzing disease transmission to examine
the interaction of multiple test factors
including sensitivity and specificity, infection
detection timing, dog selection, and tethering
compliance behaviors. We find that a diagnostic
test could be successful in combination with
existing interventions, and elimination can be
achieved within two years with 80% or higher
test sensitivity, 90% or higher specificity,
systematic testing of each dog biannually, and
long-term tethering of test-positive dogs. Due
to the long incubation period (10-14 months) and
lack of treatment, the testing rollout and
response of dog owners are critical to the
benefits of the test.
pdf
A Framework for Dynamic Control of Combat Support
Exercises
Sean McCarty (Air Force Institute of Technology)
Abstract
Future armed conflict will be characterized by
surprise as adversaries innovate and evolve.
Current exercises provide inadequate
opportunities for combat support forces to
improvise. This research proposes a framework
for human-in-the-loop control of exercises using
a graph network for modeling combined with
topological analysis and modifications to the
zero-one scheduling formulation. This framework
is assessed using the United States Air Force
Silver Flag exercise as a case study with
promising results.
pdf
Information Diffusion Model of SNS and Visualization
Method
Kazumi Sekiguchi and Masakazu Furuichi (Nihon
University)
Abstract
The spread of social media has led to an explosion of fake news and other misinformation and disinformation, which significantly impacts society. These are often driven by information
transmission by individuals, groups, and
organizations. In order to analyze the influence
of information diffusion, it is necessary not
only to visualize the spread from a bird's eye
view but also to examine the characteristics of
local information propagation and the impact of
the behavior. In this study, we developed a
multi-agent information diffusion model of
social networking service (SNS). We investigated
a visualization method that simultaneously
grasps the local information diffusion by
individuals and the overarching information
spread by multiple user clusters. This method
facilitates the recognition of the information
diffusion within a group and the final dispersal
status in addition to the condition of
information dissemination by each individual.
pdf
Using a Discrete Event Simulation to Improve Check-in
Operations at the Port of Dover
Siti Fariya (University of Kent, The Port of Dover);
Kathy Kotiadis (University of Kent); Timothy van Vugt
(The Port of Dover); and Jesse O'Hanley (University of
Kent)
Abstract
This paper showcases our use of discrete event
simulation (DES) to enhance check-in operations
at the Port of Dover (PoD). PoD is the busiest
international ferry port in the UK and since the
UK left the European Union, the port has
experienced increased processing times and
considerable delays in passenger check-in. Three
independent ferry operators run individual
check-in systems for freight and tourist
vehicles, leading to efficiency challenges,
notably prolonged queuing times and limited
throughput. Our study investigates two
alternatives: a common check-in booth for all
operators and vehicle types, and a system that
retains operator-specific booths but merges the
process for all traffic types. We aim to identify an improved operational model and to explore a range of solutions that not only make the check-in process more efficient but also significantly reduce queuing times.
pdf
Development and Application of the One-Stop Flow
Analysis Framework Enabling Rapid Digital
Engineering
Kengo Asada, Yuichi Matsuo, and Kozo Fujii (Tokyo
University of Science)
Abstract
This paper proposes a one-stop simulation
framework from point cloud acquisition through
flow analysis. Conventional flow analysis starts
with computer-aided design (CAD) software to
define the object shape and any mesh generator
to build computational grids. However, CAD data
of old buildings and rooms, including furniture,
is hardly available. Thus, CAD data creation,
which takes a lot of time, is required when
conducting flow simulations of existing
buildings first. The present study illustrates a
simplified flow analysis procedure, which
reduces this lead time by defining the object
shape with point clouds and using a
Cartesian-based flow solver. The proposed
framework simplifies the design of heating,
ventilation, and air conditioning (HVAC) and
could improve its existing process and quality.
pdf
Stochastically Constrained Level Set Approximation
Via Probabilistic Branch and Bound
Hao Huang (Yuan Ze University), Shing Chih Tsai
(National Cheng Kung University), and Chuljin Park
(Hanyang University)
Abstract
This paper investigates a simulation
optimization problem with both stochastic
objective and constraint functions with a
discrete solution space. Our objective is to
identify a set of near-optimal solutions within
a specific quantile, such as the top 10%. To
achieve this goal, we first employ a
probabilistic branch-and-bound algorithm to find
a level set of solutions. Then, we combine a
penalty function approach with the probabilistic
branch-and-bound algorithm to handle
stochastically constrained problems. Both
convergence analysis and experimental results
are provided that demonstrate the superior
efficiency of our proposed approaches over
existing methods.
pdf
A Standardized Method for Building Simulation-based
Decision Support Systems Using High Level
Architecture
Rana Ead, Yasser Mohamed, and Simaan AbouRizk
(University of Alberta)
Abstract
This research develops a standardized Federation
Object Model (FOM) for Simulation-Based
Decision-Support Systems (SB-DSS) in
construction. SB-DSS are vital for tackling
project complexities, but their development
requires considerable time and expertise,
leading to underdeveloped systems and limited
adoption. To address this, the study adopts
High-Level Architecture (HLA) standards,
integrating autonomous simulations into a single
distributed simulation. The FOM includes object
classes, interactions, and datatype definitions,
enabling efficient communication among
federates. The initial FOM version was
successfully tested with five federates,
demonstrating its effectiveness. This
standardized FOM promotes simulation
reusability, interoperability, and data-driven
decision-making, ultimately enhancing
construction project execution and
competitiveness.
pdf
The Growth of Generative AI: Hype, Harm, and
Control
Timothy Clancy (Dialectic Simulations); Asmeret Naugle
(Sandia National Laboratories); and Ignacio J.
Martinez-Moyano (Argonne National Laboratory, University
of Chicago)
Abstract
The hype-harm-control model investigates the
societal impact of generative artificial
intelligence (AI), given its growth, alignment
with societal values, and controls. This system
dynamics model was used to simulate the dynamics
and impacts of generative AI over a 10-year time
horizon. As generative AI grows, hype and
use increase, leading to both societal benefit
and societal harm. This analysis found that
while the balance of hype and societal harm
determines the controls put on AI development
and use, early societal harm creates a strong
incentive to implement societal controls that
limit the growth of generative AI overall.
pdf
A Virtual Training System Using Digital Twins Based
on Discrete Event System Formalism
JinWoo Kim, GyuSik Ham, Sooyoung Jang, and Changbeom
Choi (Hanbat National University)
Abstract
With the advancement of technology in education
and training, it has become commonplace to
conduct virtual rather than physical training to
save time and money. In addition, various
training hardware and software have been
proposed to give immersive experiences to
trainees to enhance the training effects in
various domains. The training system can be
regarded as a digital twin system, which
collects data from the trainee, analyzes the
data in the cyber world, and gives proper
feedback to the trainee. This research proposes
a virtual training system using digital twins
based on discrete event system formalism.
In particular, we focus on developing a
cost-effective digital twin and helping the
trainer to develop an evaluation system by
composing models. The training system utilizes
the webcam to collect skeleton data from the
trainee and evaluate the data by composing
discrete event system models.
pdf
Development of Production Digital Twin in
Manufacturing Using Fischertechnik Factory Model
Yuichi Matsuo, Kengo Asada, and Kozo Fujii (Tokyo
University of Science)
Abstract
Recently, there have been more opportunities to
see and hear the term Digital Twin (DT) in
various situations. However, the reality is that
the concept of DT has outpaced its practice and that there is a lack of venues and materials for learning DT content and its implementation. This paper
presents a case study at Tokyo University of
Science to develop the Production Digital Twin
in manufacturing by using Fischertechnik factory
model and Matlab/Simulink software tool. DT can
support not only education in universities
but also human resource development in
manufacturing industries through the study and
practice concerning production line
optimization, virtual commissioning,
cyber-physical system implementation, real-time
monitoring of production data, and can furthermore lead innovation in manufacturing in Japan.
pdf
Optimal Computing Budget Allocation for Monte Carlo
Tree Search in Othello
Daniel Qiu (Thomas Jefferson High School) and Jie Xu
(George Mason University)
Abstract
Upper Confidence bounds applied to Trees (UCT)
is the most popular tree policy for Monte Carlo
Tree Search (MCTS). However, UCT focuses on
minimizing cumulative regret rather than
maximizing the Probability of Correct Selection
(PCS) of the best action, which is often
preferred in game engines. To address this, we
examine an Optimal Computing Budget Allocation
(OCBA) tree policy that provides a rigorous way
for maximizing the PCS rather than minimizing
regret. MCTS-OCBA has been shown to work well
with simple games such as Tic-Tac-Toe, where the
search space is small enough to simulate
through, but not with unsolved games such as Othello
or Go. We report numerical results showing that
MCTS-OCBA performs better in Othello than
MCTS-UCT and thus demonstrate that OCBA is a more
efficient tree policy for MCTS for game engines.
pdf
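For reference, the classic OCBA allocation rule that such a tree policy applies at each node can be sketched as follows, assuming normal rewards and invented illustrative statistics; this is not the authors' MCTS-OCBA code.

```python
import numpy as np

means = np.array([0.52, 0.50, 0.45, 0.40])   # estimated action values
stds = np.array([0.10, 0.12, 0.08, 0.15])    # estimated reward std deviations
budget = 1000

b = int(np.argmax(means))
delta = means[b] - means                      # optimality gaps (zero at b)
ratio = (stds / np.where(delta == 0.0, 1.0, delta)) ** 2
ratio[b] = 0.0
ratio[b] = stds[b] * np.sqrt(np.sum((ratio / stds) ** 2))  # rule for the best
alloc = np.round(budget * ratio / ratio.sum()).astype(int)
print("budget per action:", alloc)   # close competitors receive more samples
```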
An Efficient Simulation-Based Optimization Algorithm
for a Crane Scheduling Problem in a Steelmaking
Shop
Woo-Jin Shin and Hyun-Jung Kim (Korea Advanced Institute
of Science and Technology)
Abstract
This study addresses a crane scheduling problem
in a steelmaking shop, where cranes are
responsible for transporting ladles with molten
steel between machines. To meet production
schedules, the coordination between cranes and
machines is crucial so that ladles are transported at appropriate times. Also, because multiple cranes share a common track, interference between them must be avoided. To
address this problem, we propose an efficient
algorithm based on iterative simulations.
Several dominance rules are developed to reduce
the solution space and accelerate the
convergence of the algorithm. Experimental
results show that our approach can derive
high-quality solutions within a short time.
pdf
Simulating Job Replication Versus Its Energy
Usage
Vladimir Marbukh and Brian Cloteaux (NIST)
Abstract
Due to the proliferation of computers in all
aspects of our lives, the energy and ecological
impacts of computing are becoming increasingly
important. Some of the transformative algorithms
of recent years generate huge amounts of carbon
dioxide, potentially damaging the environment.
We have developed a set of simulations for
understanding the trade-offs between distributed
computing and its carbon impact. We briefly
describe our current work and our future
research aiming at finding practical algorithmic
solutions.
pdf
Bayesian Subset Selection for Near-Optimal
Systems
Javier Gatica (Pontificia Universidad Católica de
Chile) and Jinbo Zhao and David J. Eckman (Texas A&M
University)
Abstract
We study the ranking-and-selection problem of
selecting a subset of simulated systems that
with high probability contains a system with
near-optimal performance. The posterior
probability that at least one system in a given
subset is near optimal - referred to as the
posterior probability of good inclusion (pPGI) -
can be expressed in terms of a sum of
one-dimensional integrals and computed via
numerical integration. Still, enumerating all
possible subsets and computing their associated
pPGI is impractical for large problem instances, so we explore approximate solution methods. In
particular, we investigate a greedy algorithm
that builds a subset by iteratively adding the
system that increases the pPGI the most.
pdf
An Integrated Framework for Efficient Wireless
Coverage Mapping Using Ray Tracing Acceleration
Hieu Le, Jian Tao, and Hernan Santos (Texas A&M)
Abstract
Evaluation of channel properties is one of the
most important aspects in wireless
communications. Ray tracing simulations have
been widely used to estimate channel
characteristics. In this poster, we put together
many aspects of ray tracing techniques and
signal estimation methods to build a coverage
map. Acceleration structures for ray tracing are
created to drastically reduce the computational
time of the traversal of the ray-primitive
intersections. Moreover, electromagnetics and
wireless communications theories are studied to
accurately estimate signal strength at an
arbitrary point in the predefined area of the
coverage map.
pdf
Track Coordinator - Ph.D. Colloquium: Anatoli Djanatliev (University of Erlangen-Nuremberg),
Siyang Gao (City University of Hong Kong), Cristina
Ruiz-Martín (Carleton University), Eunhye Song (Georgia
Institute of Technology)
PhD Colloquium · PhD Colloquium
PhD Colloquium Keynote: Methods and Applications or
Applications and Methods?
Chair: Siyang Gao (City University of Hong Kong)
Methods and Applications or Applications and
Methods?
Stephen Chick (INSEAD)
Abstract
Stochastic simulation is a powerful framework
for supporting decision makers in a broad range
of applications. Its methods draw upon applied
probability, system dynamics, statistics,
computing, and other fields. Simulation methods
are interesting in and of themselves, including
uncertainty modelling, stochastic optimization,
the valuation of uncertainty, efficiency
improvement, and the modelling of complex system
behavior that might be hard to analyze through
closed-form analysis. Applications may sometimes
have standard approaches to support the analysis
to inform a decision maker, but decision makers
may also have criteria that are not reflected
fully in a simulation model. And sometimes new
applications give rise to very interesting
structures that call for further analysis. In
this talk, we discuss the feedback loop between
methods development that allow new applications
to be addressed, and new applications that give
rise to new methods.
pdf
PhD Colloquium · PhD Colloquium
PhD Colloquium Session A1
Chair: Siyang Gao (City University of Hong Kong)
Reusing Historical Observations in Natural Policy
Gradient
Yifan Lin (Georgia Institute of Technology)
Abstract
Reinforcement learning provides a framework for
learning-based control, whose success largely
depends on the amount of data it can utilize.
The efficient utilization of historical samples
obtained from previous iterations is essential
for expediting policy optimization. Empirical
evidence has shown that offline variants of
policy gradient methods based on importance
sampling work well. However, existing literature
often neglects the interdependence between
observations from different iterations, and the
good empirical performance lacks a rigorous
theoretical justification. In this paper, we
study an offline variant of the natural policy
gradient method with reusing historical
observations. We show that the biases of the
proposed estimators of Fisher information matrix
and gradient are asymptotically negligible, and
reusing historical observations reduces the
conditional variance of the gradient estimator.
The proposed algorithm and convergence analysis
could be further applied to popular policy
optimization algorithms such as trust region
policy optimization.
pdf
Dispatching in Real Frontend Fabs With Industrial
Grade Discrete-Event Simulations by Deep Reinforcement
Learning With Evolution Strategies
Patrick Stöckermann (Infineon Technologies AG)
Abstract
Scheduling is a fundamental task in each
production facility with implications for the
overall efficiency of the facility. While
classic job-shop scheduling problems become
intractable when the number of machines and jobs
increases, the problem gets even more complex in
the context of semiconductor manufacturing,
where flexible production control and stochastic
event handling are required. In this paper, we
propose a Deep Reinforcement Learning approach
for lot dispatching to minimize the Flow Factor
(FF) of a digital twin of a real-world,
stochastic, large-scale semiconductor
manufacturing facility. We present the first
application of Reinforcement Learning (RL) to an
industrial grade semiconductor manufacturing
scenario of that size. Our approach leverages a
self-attention mechanism to learn an effective
dispatching policy for the manufacturing
facility and is able to reduce the global FF of
the fab.
pdf
Cutting through the Noise: Machine Learning Proxies
for High Dimensional Nested Simulation
Xintong Li (University of Waterloo)
Abstract
Deep learning models have gained great success
in many applications, but their adoption in
financial and actuarial applications have been
received by regulators with trepidation. The
lack of transparency and interpretability of
these models raises skepticism about their
resilience and reliability, which are important
factors for financial stability and insurance
benefit fulfillment. In this study, we use
stochastic simulation as a data generator to
examine deep learning models under controlled
settings. Our study shows interesting findings
in fundamental questions like “What do
deep learning models learn from noisy
data?” and “How well do they learn
from noisy data?”. Based on our findings,
we propose an efficient nested simulation
procedure that uses deep learning models as
proxies to estimate tail risk measures of
hedging errors for variable annuities. The
proposed procedure uses deep learning to
concentrate simulation budget on tail scenarios
while maintaining transparency in estimation.
pdf
Solving Deadlock Situations in Intralogistics with
Reinforcement Learning
Marcel Müller (Otto von Guericke University
Magdeburg)
Abstract
Intralogistics faces challenges from global
disruptions such as the COVID-19 pandemic,
geopolitical tensions, and wars, emphasizing the
need for increased flexibility of logistic
systems. Compounded by staff shortages in
industrial countries, automation continues to
rise, evidenced by the growing number of
industrial robots. This rise in automation
demands enhanced capabilities for intralogistic
systems, including handling deadlocks. This
research delves into the potential of
reinforcement learning (RL) in addressing
deadlocks, aiming to increase the efficiency,
flexibility, and resilience of intralogistic
systems.
pdf
Feature Selection in Generalized Linear models via
the Lasso: To Scale or Not to Scale?
Anant Mathur (University of New South Wales)
Abstract
Lasso regression is a popular regularization
method for feature selection in statistics.
Prior to computing the Lasso estimator in both
linear and generalized linear models, it is
common to conduct a preliminary rescaling of the
feature matrix to ensure that all the features
are standardized. Without this standardization,
it is argued, the Lasso estimate will,
unfortunately, depend on the units used to
measure the features. We propose a new type of
iterative rescaling of the features in the
context of generalized linear models. Whilst
existing Lasso algorithms perform a single
scaling as a preprocessing step, the proposed
rescaling is applied iteratively throughout the
Lasso computation until convergence. We provide
numerical examples, with both real and simulated
data, illustrating that the proposed iterative
rescaling can significantly improve the
statistical performance of the Lasso estimator
without incurring any significant additional
computational cost.
pdf
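A small sketch of why the preliminary scaling matters (scikit-learn assumed; the paper's iterative rescaling goes beyond this single preprocessing step): changing one feature's units changes the Lasso fit unless features are standardized first.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(0, 0.1, 200)

X_units = X * np.array([1.0, 1000.0, 1.0])   # same data, feature 2 rescaled
for name, Xm in [("raw units", X_units),
                 ("standardized", StandardScaler().fit_transform(X_units))]:
    coef = Lasso(alpha=0.1).fit(Xm, y).coef_
    print(f"{name:12s} coefficients: {np.round(coef, 4)}")
```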
Hyperheuristic Optimization as Decision Support
the Operative Service Delivery Planning in the Context
of Product-Service Systems
Enes Alp (Ruhr-Universität Bochum)
Abstract
In the pursuit of differentiation and revenue
growth, numerous manufacturing enterprises
are innovating their business models through the
introduction of Product-Service Systems (PSS).
In these business models, the efficacy of
service delivery assumes paramount significance,
leading to challenges in planning. The
objective of this PhD project is the
conceptualization and development of a decision
support system for operative service delivery
planning within the context of PSS.
pdf
System Simulation and Machine Learning-Based
Maintenance Optimization for an Inland Waterway
Transportation System
Maryam Aghamohammadghasem (University of Arkansas)
Abstract
To keep an inland waterway transportation system
(IWTS) up and running, the interconnected
infrastructure, including lock and dam systems,
must remain in good operating condition.
However, unexpected disruptions often occur,
causing significant transportation delays and
economic losses. To evaluate the impacts of such
disruptions, a Python-enhanced NetLogo
simulation tool is developed, in which extreme
natural events are considered and characterized
by a spatiotemporal model. With this tool,
optimal maintenance strategies that maximize the
total cargo throughput of the IWTS are
determined via deep reinforcement learning. A
case study of the lower Mississippi River system
and the McClellan-Kerr Arkansas River Navigation
System is conducted to illustrate the capability
of the developed simulation and machine
learning-based method for IWTS maintenance
optimization.
pdf
Strengthening Emergency Department Resilience:
Simulation-Based Surge Management
Eman Ouda (Khalifa University)
Abstract
This study aims to improve the resilience of the
Emergency Department (ED) to handle demand
surges through a combination of Discrete Event
Simulation (DES) and resilience assessment
techniques. By evaluating resistance and
recoverability components, the analysis examines
the resilience of the ED, patient flow dynamics,
and resource requirements. A dedicated
simulation model is developed to uncover how the
ED performs during normal operations and demand
surges, exploring the effects of alterations and
additional resources on resilience using the
resilience triangle framework for optimized
resource allocation. This research improves our
understanding of ED resilience, paving the way
for further investigations into performance
improvement during demand spikes, and the
results suggest new patient flow strategies to
enhance resilience.
pdf
Expediting Stochastic Derivative-free
Optimization
Yunsoo Ha (North Carolina State University)
Abstract
Adaptive sampling-based trust-region
optimization has emerged as an efficient solver
for nonlinear and nonconvex problems in noisy
derivative-free environments. This class of
algorithms proceeds by iteratively constructing
local models on objective function estimates
that use a carefully chosen number of calls to
the stochastic oracle. To expedite this class of
algorithms, we introduce four refinements: (a)
quadratic local models with diagonal Hessian,
(b) a direct search, (c) a reusing strategy, and
(d) common random numbers. We have substantiated
that the introduced refinements enable the
algorithm to achieve accelerated convergence,
both in numerical simulations and in theoretical
analyses.
pdf
Conditional Importance Sampling for Convex Rare-Event
Sets
Lewen Zheng (The Chinese University of Hong Kong)
Abstract
This paper studies the efficient estimation of
expectations defined on convex rare-event sets
using importance sampling. Classical importance
sampling methods often neglect the geometry of
the target set, resulting in a significant
number of samples falling outside the target
set. This can lead to an increase in the
relative error of the estimator as the target
event becomes rarer. To address this issue, we
develop a conditional importance sampling scheme
that achieves bounded relative error by changing
the sampling distribution to ensure that a
majority of samples lie inside the target set.
The proposed method is easy to implement and
significantly outperforms the existing
approaches in various numerical experiments.
pdf
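For contrast, a standard mean-shift importance sampler with explicit likelihood-ratio weights looks like this for the rare event {X > c}; the paper's conditional scheme instead constrains samples to lie inside the convex target set.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
c, n = 5.0, 100_000                 # rare event {X > c} for X ~ N(0, 1)

z = rng.normal(c, 1.0, n)           # sample from the shifted proposal N(c, 1)
w = stats.norm.pdf(z) / stats.norm.pdf(z, loc=c)   # likelihood-ratio weights
est = np.mean((z > c) * w)
print(f"IS estimate: {est:.3e}   exact: {stats.norm.sf(c):.3e}")
```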
Efficient Input Uncertainty Quantification for
Regenerative Simulation
Linyun He (Georgia Institute of Technology)
Abstract
The initial bias in steady-state simulation can
be characterized as the bias of a ratio
estimator if the simulation model has a
regenerative structure. This work tackles input
uncertainty quantification for a regenerative
simulation model when its input distributions
are estimated from finite data. Our aim is to
construct a bootstrap-based confidence interval
(CI) for the true simulation output mean
performance that provides a correct coverage
with significantly less computational cost than
the traditional methods. Exploiting the
regenerative structure, we propose a k-nearest
neighbor (kNN) ratio estimator for the
steady-state performance measure at each set of
bootstrapped input models and construct a
bootstrap CI from the computed estimators.
Asymptotically optimal choices for k and
bootstrap sample size are discussed. We further
improve the CI by combining the kNN and
likelihood ratio methods. We empirically compare
the efficiency of the proposed estimators with
the standard estimator using queueing examples.
pdf
PhD Colloquium · PhD Colloquium
PhD Colloquium Session B1
Chair: Enlu Zhou (Georgia Institute of Technology)
Shapley-Shubik Explanations of Feature
Importance
Gayane Grigoryan (Old Dominion University)
Abstract
Explaining feature importance values in models
is a central concern in the realm of explainable
artificial intelligence (XAI). While the Shapley
value has garnered significant attention, there
are other promising cooperative game theory
(CGT) solutions, such as the Shapley-Shubik,
that have not received the same amount of
attention. In this paper, we explore the
potential of the Shapley-Shubik method for
elucidating feature importance values in
simulations and machine learning models.
pdf
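As a pointer to the method's mechanics, the Shapley-Shubik index of a weighted voting game counts how often each player is pivotal across all orderings. A brute-force sketch follows, with an invented game rather than the paper's feature-importance setup.

```python
from itertools import permutations

weights = {"A": 4, "B": 3, "C": 2, "D": 1}   # invented weighted voting game
quota = 6
pivotal = {p: 0 for p in weights}

for order in permutations(weights):
    total = 0
    for player in order:
        total += weights[player]
        if total >= quota:       # this player tips the coalition past quota
            pivotal[player] += 1
            break

n_orders = sum(pivotal.values())  # one pivotal player per ordering (24 total)
for p, count in pivotal.items():
    print(f"{p}: {count / n_orders:.3f}")
```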
Breaking the Monotony: Promoting Diversity in
High-dimensional Batch Surrogate Optimization
Nazanin Nezami (University of Illinois Chicago)
Abstract
In the realm of high-dimensional batch surrogate
optimization, the challenge of fostering
diversity while pursuing optimal solutions is
paramount. Traditional approaches often result
in monotonous exploration patterns, hindering
the discovery of promising solutions and
reducing efficiency. This thesis introduces
innovative strategies, prioritizing diversity
and exploration to break free from the monotony
inherent in such tasks. Additionally, the thesis
explores the impact of algorithmic
hyperparameters on the exploration-exploitation
trade-off to establish a robust framework. The
"Elevating Exploration" strategies prioritize
diverse candidate batch generation through
adaptive sampling techniques, infusing vitality
into the optimization process and effectively
exploring uncharted regions of the search space.
Empirical validation on optimization problems
confirms their effectiveness in navigating
complex landscapes. Beyond theoretical
advancements and empirical validation, this
thesis lays the groundwork for a paradigm shift,
empowering practitioners to approach complex
optimization challenges with renewed vigor and
precision by promoting diversity and elevated
exploration.
pdf
A Calibration Model for Bot-Like Behaviors in
Agent-Based Anagram Game Simulation
Xueying Liu (Virginia Polytechnic Institute and State
University)
Abstract
Experiments that are games played among a
network of players are widely used to study
human behavior. Furthermore, bots or intelligent
systems can be used in these games to produce
contexts that elicit particular types of human
responses. Bot behaviors could be specified
solely based on experimental data. In this work,
we take a different perspective, called the
Probability Calibration (PC) approach, to
simulate networked group anagram games with
certain players having bot-like behaviors. The
proposed method starts with data-driven models
and calibrates in principled ways the parameters
that alter player behaviors. It can alter the
performance of each type of agent (e.g., bot)
action, per player, in group anagram games.
Further, statistical methods are used to test
whether the PC models produce results that are
statistically different from those of the
original models. Case studies demonstrate the
merits of the proposed method.
pdf
An Additive Decomposition for Discrete Simulation
Optimization Using Gaussian Markov Random Fields
Harun Avci (Northwestern University)
Abstract
We consider a discrete optimization via
simulation problem with high-dimensional,
integer-ordered decision variables. One of the
methods to solve such a problem is Bayesian
optimization (BO). Although BO can provide rapid
solution improvement within a tight
computational budget, the posterior update
creates a significant computational overhead for
large-scale problems. To overcome this
challenge, we propose an algorithm that
decomposes the prior distribution into an
additive form as an approximation. Despite this
approximation, our numerical analysis reveals
that the algorithm can obtain rapid improvement.
pdf
Simulation-Based Resolution of Deadlocks in Automated
Guided Vehicles using Multi-Agent Reinforcement
Learning in Intralogistics
Mustafa Jelibaghu (Technische Hochschule Aschaffenburg)
Abstract
This abstract presents a novel approach to
address deadlock scenarios in Automated Guided
Vehicle (AGV) systems utilizing Multi-Agent
Reinforcement Learning (MARL) within a
simulation framework. Deadlocks, frequently
encountered in AGV operations, impede system
efficiency. Traditional resolution methods can
be complex and suboptimal. This study proposes a
MARL-based solution, capitalizing on the
decentralized decision-making prowess of agents
to navigate AGVs out of deadlocks. A simulated
environment accurately mimics real-world AGV
dynamics, enabling agents to learn deadlock
resolution strategies through trial and error.
The results demonstrate that the MARL approach
significantly mitigates deadlocks, enhancing
overall system performance. This research
contributes to the synergy between simulation,
multi-agent systems, and reinforcement learning,
offering an efficient deadlock resolution
paradigm with potential real-world AGV
application.
pdf
How People's Beliefs Determine Society's Disease
Resistance
Geonsik Yu (Purdue University)
Abstract
Protecting public health from infectious
diseases often relies on people’s beliefs,
especially when self-care interventions are the
only viable tools for disease mitigation. In
this study, we focus on how public opinion and
its surrounding factors affect disease spread.
We propose an agent-based simulation framework
that incorporates opinion dynamics with an
epidemic model. We demonstrate that the model
can replicate the patterns of opinion and
disease dynamics observed in 15 countries during
the COVID-19 pandemic. Based on the fitted
models, we examine how various opinion-related
factors influence the consequences of the
epidemic. For our explanatory model, we employ
the random forest algorithm and assess the
permutation importance of these factors. Partial
dependence plots are also investigated to
observe the direction of the factors’
impacts. Our results reveal that the initial
level of public opinion on preventive
interventions has a dominant impact on the total
count of new infections.
pdf
Marine Ecosystem Services Disruption and Social
Violence
Rafael Hurtado (University of Central Florida)
Abstract
Marine ecosystem services support coastal
communities by offering essential sustenance,
protection, and cultural benefits. However, the
global decline in these ecosystems has disrupted
these services, impacting the communities
reliant on them. The Archipelago of San Andres
Providencia and Santa Catalina (ASAPSC) in the
Colombian Caribbean exemplifies this decline,
coinciding with a rise in violent crimes and
homicide rates. This study employs an
agent-based model (ABM) to simulate the ASAPSC
case and examine the potential links between
marine ecosystem depletion and the escalation of
social violence. The simulation results suggest
a link between disruption of ecosystem services
and social violence and set the stage for future
empirical research in environmental security.
pdf
Focused Flexibility in Workforce Scheduling
Johanna Wiesflecker (The University of Edinburgh)
Abstract
In many industries, work schedules often go
through lengthy approval processes. Once
approved, schedules may be locked in for long
time horizons (e.g., months). Working
regulations allow for partial changes
(re-rostering) in a small number of extreme
cases. Most other disruptions (staff
absenteeism, change in demand pattern, etc.)
will be dealt with only at huge costs. Injecting
flexibility (affordable, case-specific
re-rostering options) from the very outset
(schedule approval stage) can foster schedule
robustness at lower costs. This work shows how
to jointly adopt simulation and Adaptive Large
Neighborhood Search to do just that. At each
iteration of the proposed Sim-ALNS algorithm,
ALNS selects a combination of levels of
flexibility (within guidelines set by the
organization), while a Monte-Carlo simulation
scheme evaluates the performance of the
solution. Experiments in an airport security
setting show that the method leads to a 27%
decrease in average weekly re-rostering cost.
pdf
A Combined Simulation Optimization Framework to
Improve Logistics Processes in the Production of
Specialty Chemicals
Maximilian Kiefer (TU Dortmund University, Graduate
School of Logistics / Institute of Transport Logistics)
Abstract
The chemical industry is experiencing shifts in
market conditions, leading to an increasing need
for fast and individually engineered chemicals.
This trend causes a change from mass production
to the production of small, demand-driven
quantities. This results in numerous variants and
container types, requiring efficient logistics
management to handle the complexity. A
methodical framework should enable the user to
fulfill the specific requirements of the
logistics processes and thus make the complex
planning manageable. In particular, supply and
disposal methods and container management are
under special consideration. Therefore, a
simulation and optimization framework is
developed. First, the motivation of the research
project is presented. Afterward, a framework for
planning logistics processes is designed,
consisting of data preparation, mathematical
optimization, and simulation.
pdf
PhD Colloquium · PhD Colloquium
PhD Colloquium Session A2
Chair: Siyang Gao (City University of Hong Kong)
Computer Simulation-based Templates for Lean
Implementation in Small and Medium Construction
Enterprises
Prashanth Kumar Sreram (Indian Institute of Technology
Bombay, National Institute of Construction Management
and Research Hyderabad)
Abstract
A country's economic advancement hinges on the
construction sector, but its growth is marred by
the global construction industry's chief
predicament: tangible and intangible waste. Lean
construction employs strategies such as Value
Stream Mapping (VSM), yielding crucial time and
cost savings. Presently, VSM's execution is
limited to static process representation,
separating preparation from the assessment of enhancement alternatives. In the era of
construction 4.0, embracing technological and
digital shifts is imperative, enhancing
performance via simulation. Hence, uniting Lean
Construction with Simulation becomes essential,
validating lean principles through simulation
models and aiding improved project
decision-making. Thus, research concentrates on
crafting VSM-based discrete event simulation
(DES) models tailored for small and medium
enterprises in the offsite construction realm.
The current focus is offsite construction, while
forthcoming research addresses complex
activities, refining simulation models as
valuable tools for industry practitioners.
pdf
Causal Dynamic Bayesian Networks for Simulation
Metamodeling
Pracheta Amaranath (University of Massachusetts Amherst)
Abstract
A traditional metamodel for a discrete-event
simulation approximates a real-valued
performance measure as a function of the
input-parameter values. We introduce a novel
class of metamodels based on modular dynamic
Bayesian networks (MDBNs), a subclass of
probabilistic graphical models which can be used
to efficiently answer a rich class of
probabilistic and causal queries (PCQs). Such
queries represent the joint probability
distribution of the system state at multiple
time points, given observations of, and
interventions on, other state variables and
input parameters. This paper is a first
demonstration of how the extensive theory and
technology of causal graphical models can be
used to enhance simulation metamodeling. We
demonstrate this potential by showing how a
single MDBN for an M/M/1 queue can be learned
from simulation data and then be used to quickly
and accurately answer a variety of PCQs, most of
which are out-of-scope for existing metamodels.
pdf
Improving Buffer Storage Performance in Ceramic Tile
Industry Via Simulation
Marco Taccini (University of Modena and Reggio Emilia)
Abstract
This study aims to identify the best strategy
to temporarily store products within a buffer
area in an Italian ceramic tile company. The
storage policy is analyzed to maximize the
storage capacity, facilitate operators'
activities, and, consequently, improve the
warehouse logistics performance. A discrete
event simulation was conducted using Salabim, a
Python based open-source software, in order to
determine the best policy. We compare the
performance of the current storage policy, based
on technical production properties of products,
and a newly proposed one, based on products'
downstream destination. The results suggested
that the proposed strategy significantly
improves the performance of the buffer area
management. The approach can be applied to
different applications, contributing to the
literature on simulation-based decision-making
in material management. Furthermore, the study
provides a functional case study showing the
potential and achievable results of Salabim for
modeling complex systems.
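For readers unfamiliar with Salabim, a minimal
sketch of a buffer served by a forklift follows.
It is not the authors' model: the component
names, times, and the single storage policy are
assumptions, and it targets recent Salabim
releases (v23+), whose process methods run
without yield:

    import salabim as sim

    env = sim.Environment()
    buffer = sim.Queue("buffer")

    class KilnExit(sim.Component):
        def process(self):
            while True:
                self.hold(sim.Exponential(10).sample())  # a tile batch arrives
                sim.Component(name="batch").enter(buffer)
                if forklift.ispassive():
                    forklift.activate()

    class Forklift(sim.Component):
        def process(self):
            while True:
                if len(buffer) == 0:
                    self.passivate()   # sleep until a batch arrives
                batch = buffer.pop()
                self.hold(8)           # travel and put-away time under the policy

    forklift = Forklift()
    KilnExit()
    env.run(till=10_000)
    print("mean buffer occupancy:", buffer.length.mean())

Comparing storage policies then amounts to
swapping the put-away logic and the time it
implies, and contrasting occupancy and
throughput statistics across runs.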
pdf
Integrating AI and Simulation for Intelligent
Material Handling
Sriparvathi Shaji Bhattathiri (Rochester Institute of
Technology)
Abstract
With the increasing integration of autonomous
mobile robots in warehouse facilities for
storage and retrieval, the need arises to make
intelligent dispatching decisions to maximize
operational efficiency and meet shipping
deadlines. The aim of this research is to enable
effective real-time dispatching decisions that
account for both travel distance and due date.
In particular, we develop a
reinforcement learning method for task selection
in a multi-agent warehouse environment. A Monte
Carlo simulation approach is used to train the
Artificial Intelligence model and assess its
capabilities and limitations. The performance of
the proposed model is compared with that of
rule-based task selection methods. The
preliminary experimental results indicate strong
potential in employing reinforcement learning
for real-time dispatch in warehouse
environments.
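As a hedged, single-robot toy version of the
idea (the authors work in a multi-agent
warehouse; the state encoding, rewards, and
parameters below are hypothetical), tabular
Q-learning trained on Monte Carlo rollouts might
look like this:

    import random

    # State: (distance bucket, urgency bucket); actions: 0 = nearest task,
    # 1 = most urgent task. Q maps (state, action) to an estimated value.
    Q = {}
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

    def q(s, a):
        return Q.get((s, a), 0.0)

    def choose(s):
        if random.random() < EPS:
            return random.choice([0, 1])
        return max((0, 1), key=lambda a: q(s, a))

    def episode():
        """One Monte Carlo rollout over a toy set of (distance, slack) tasks."""
        tasks = [(random.randint(1, 9), random.randint(1, 9)) for _ in range(20)]
        while tasks:
            s = (min(d for d, _ in tasks) // 3, min(u for _, u in tasks) // 3)
            a = choose(s)
            key = (lambda t: t[0]) if a == 0 else (lambda t: t[1])
            d, u = min(tasks, key=key)
            tasks.remove((d, u))
            r = -d - (5.0 if u <= 1 else 0.0)  # travel cost plus lateness penalty
            s2 = s if not tasks else (min(x for x, _ in tasks) // 3,
                                      min(y for _, y in tasks) // 3)
            nxt = 0.0 if not tasks else max(q(s2, b) for b in (0, 1))
            Q[(s, a)] = q(s, a) + ALPHA * (r + GAMMA * nxt - q(s, a))

    for _ in range(2000):
        episode()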
pdf
Model Predictive Control in Optimal Intervention of
Covid-19 with Mixed Epistemic-aleatoric
Uncertainty
Jinming Wan (Binghamton University)
Abstract
Non-pharmaceutical interventions (NPI) have been
proven vital in the fight against the COVID-19
pandemic before the massive rollout of
vaccinations. Considering the inherent
epistemic-aleatoric uncertainty of parameters,
accurate simulation and modeling of the
interplay between the NPI and contagion dynamics
are critical to the optimal design of
intervention policies. We propose a modified
SIRD-MPC model that combines a modified
stochastic
Susceptible-Infected-Recovered-Deceased (SIRD)
compartment model with mixed epistemic-aleatoric
parameters and Model Predictive Control (MPC) to
develop robust NPI control policies that contain
COVID-19 infections with minimal economic
impact.
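For orientation, a generic deterministic SIRD
skeleton with an NPI control u(t) in [0,1] is
shown below; the authors' model is stochastic
with mixed epistemic-aleatoric parameters, so
the exact formulation differs:

    \dot{S} = -\beta\,(1-u(t))\,\frac{S I}{N}
    \dot{I} = \beta\,(1-u(t))\,\frac{S I}{N} - (\gamma + \mu)\,I
    \dot{R} = \gamma I, \qquad \dot{D} = \mu I

Here \beta, \gamma, and \mu are the
transmission, recovery, and death rates. MPC
then repeatedly minimizes, over a receding
horizon, a cost that trades infections and
deaths against the economic burden of u.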
pdf
Perishable Inventory Management: Human Milk Banking
Case Study
Marta Staff (University of Exeter)
Abstract
Despite providing lifesaving donor human milk to
vulnerable premature infants, human milk banking
is greatly overlooked from an Operations
Research perspective; its distinctive,
as-yet-unexplored characteristics offer
attractive prospects for Modelling and
Simulation research. The effective management of
inventory, where products have a limited shelf
life, adds to its complexity. The newsvendor
model commonly used to study inventory decisions
is unlikely to capture the intricacies of items
with extended shelf lives. A milk donor
typically accumulates milk over time, resulting
in the donation of a “stash” consisting of milk
units with different expiry dates. The decision
of whether to treat the stash as a whole or to
split it, when it is progressed out of the
ingress inventory into production, affects the
remaining shelf life of the final product as
well as the associated production costs. Hence,
DES is being utilized to investigate the costs
and benefits of batch splitting.
pdf
Estimating Treatment Effects from Simulation Samples
of Population-scale Models
Abdulrahman Ahmed (University of Pittsburgh)
Abstract
Large-scale models require an exhaustive amount
of computational power to simulate, especially
when multiple treatment conditions must be
evaluated across large geographical regions.
Developing a method that distributes
computational resources efficiently is therefore
essential for conducting large-scale
simulations. Agent-based modeling can generate
accurate simulation samples, and our goal is to
use them to estimate treatment effects and
optimize potential interventions with as few
simulation samples as possible. In this
abstract, I show methods that outperform
benchmarks by dynamically accounting for the
uncertainty in the estimated treatment effects,
and I discuss our next steps for improving them.
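A minimal sketch of uncertainty-aware budget
allocation (an assumption-laden stand-in for the
author's methods: the arms, outcome model, and
allocation rule below are illustrative) sends
each new replication to the treatment arm with
the largest standard error:

    import numpy as np

    rng = np.random.default_rng(1)
    TRUE = [0.0, 0.3, 0.5, 0.45]        # unknown mean outcomes (arm 0 = control)

    def simulate(arm):                  # stand-in for one ABM replication
        return TRUE[arm] + rng.normal(0, 1)

    n = np.full(4, 5); sums = np.zeros(4); sq = np.zeros(4)
    for arm in range(4):                # small pilot per arm
        for _ in range(5):
            x = simulate(arm); sums[arm] += x; sq[arm] += x * x

    for _ in range(500):                # spend the remaining budget adaptively
        mean = sums / n
        var = (sq - n * mean ** 2) / (n - 1)
        arm = int(np.argmax(var / n))   # largest standard error first
        x = simulate(arm); n[arm] += 1; sums[arm] += x; sq[arm] += x * x

    print("effects vs control:", (sums / n - sums[0] / n[0]).round(2))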
pdf
Adaptive Ranking and Selection Based Genetic
Algorithms For Data-driven Problems
Kimia Vahdat (North Carolina State University)
Abstract
We present ARGA, the Adaptive Robust Genetic
Algorithm, for optimizing simulation problems
with binary variables affected by input
uncertainty and Monte Carlo noise. In this
method, a population evolves as more information
about the high-dimensional, stochastic problem
becomes available. ARGA conducts ranking and
selection with a debiasing mechanism of fitness
values using fast iterated bootstraps economized
with control variates. Debiasing reduces the
model risk due to input uncertainty bias,
leading to a more accurate ranking of designs.
Given the double loop of function evaluations,
we incorporate adaptive budget allocation
throughout the search only if the current
population's proximity to optimality signals the
need for a smaller standard error. In that case,
we allocate replications to the input model of
the design most responsible for risk. Empirical
results with a fixed optimization budget show
that ARGA obtains significantly better solutions
in feature selection problems across various
datasets.
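To fix ideas, a stripped-down genetic algorithm
over binary designs with noisy fitness is
sketched below; it omits ARGA's defining pieces
(iterated-bootstrap debiasing, control variates,
and adaptive budget allocation), and the
objective is synthetic:

    import numpy as np

    rng = np.random.default_rng(2)
    D, POP, GENS = 12, 20, 40

    def noisy_fitness(x, reps=5):
        """Synthetic objective: favor the first half of the bits."""
        signal = x[:D // 2].sum() - 0.5 * x[D // 2:].sum()
        return signal + rng.normal(0, 1, reps).mean()  # Monte Carlo noise

    pop = rng.integers(0, 2, (POP, D))
    for _ in range(GENS):
        f = np.array([noisy_fitness(ind) for ind in pop])
        elite = pop[np.argsort(-f)[:POP // 2]]         # rank on estimated fitness
        children = elite.copy()
        for i in range(POP // 2):                      # one-point crossover
            cut = rng.integers(1, D)
            j = rng.integers(0, POP // 2)
            children[i, cut:] = elite[j, cut:]
        flip = rng.random(children.shape) < 1.0 / D    # bit-flip mutation
        children[flip] ^= 1
        pop = np.vstack([elite, children])

    print("best design by the final evaluation:", pop[0])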
pdf
Enhancing Parallel Large-Scale Ranking and Selection
Using Clustering Techniques
Zishi Zhang (Guanghua School of Management,
Peking University)
Abstract
We explore the use of correlation-based
clustering techniques to enhance large-scale R&S
procedures in a parallel computing environment.
Both theoretical analysis and numerical
experiments demonstrate that clustering
techniques can significantly improve the sample
efficiency of existing R&S methods.
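A small sketch of the ingredient named in the
abstract, with synthetic data standing in for
pilot simulation output: designs are clustered
on the correlation of their outputs, and
screening then proceeds within clusters. The
data-generating model and cluster count are
assumptions:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(3)
    K, N0 = 50, 30                      # designs, pilot replications each
    common = rng.normal(0, 1, (5, N0))  # shared noise induces correlation
    pilot = (rng.normal(0, 1, K)[:, None]
             + rng.random((K, 5)) @ common
             + rng.normal(0, 0.5, (K, N0)))

    dist = squareform(1 - np.corrcoef(pilot), checks=False)  # correlation distance
    labels = fcluster(linkage(dist, method="average"), t=5, criterion="maxclust")

    # Screen within clusters first: correlated designs can be compared with
    # fewer replications, and clusters can be processed on separate workers.
    for c in range(1, 6):
        idx = np.where(labels == c)[0]
        print(f"cluster {c}: provisional best is design "
              f"{idx[np.argmax(pilot[idx].mean(axis=1))]}")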
pdf
Reliable Adaptive Stochastic Optimization with High
Probability Guarantees
Miaolan Xie (Cornell University)
Abstract
To handle real-world data that is noisy, biased
and even corrupted, we consider a simple
adaptive framework for stochastic optimization
where the step size is adaptively adjusted
according to the algorithm's progress instead of
manual tuning or using a pre-specified sequence.
Function value, gradient and possibly Hessian
estimates are provided by probabilistic oracles
and can be biased and arbitrarily corrupted,
capturing multiple settings including expected
loss minimization in machine learning,
zeroth-order and low-precision optimization.
This framework is very general and encompasses
stochastic variants of line search,
quasi-Newton, cubic regularized Newton and SQP
methods for unconstrained and constrained
problems. Under reasonable conditions on the
oracles, we show high probability bounds on the
sample and iteration complexity of the
algorithms.
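The core mechanism can be sketched in a few
lines: accept a step and enlarge the step size
when noisy estimates show sufficient decrease,
otherwise shrink it. The oracles and constants
below are illustrative, and the sketch does not
reproduce the paper's generality or guarantees:

    import numpy as np

    rng = np.random.default_rng(4)

    def f_est(x, sigma=0.1):            # noisy function value oracle
        return (x ** 2).sum() + rng.normal(0, sigma)

    def g_est(x, sigma=0.1):            # noisy (possibly biased) gradient oracle
        return 2 * x + rng.normal(0, sigma, x.shape)

    x, alpha = np.full(5, 3.0), 1.0
    THETA, UP, DOWN = 0.1, 1.5, 0.5

    for _ in range(200):
        g = g_est(x)
        trial = x - alpha * g
        # Sufficient-decrease test on estimates; noise can fool it, which is
        # why the analysis yields high-probability rather than sure bounds.
        if f_est(trial) <= f_est(x) - THETA * alpha * (g @ g):
            x, alpha = trial, min(alpha * UP, 10.0)  # success: accept, grow step
        else:
            alpha *= DOWN                            # failure: shrink step

    print("final iterate norm:", np.linalg.norm(x))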
pdf
PhD Colloquium · PhD Colloquium
PhD Colloquium Session B2
Chair: Enlu Zhou (Georgia Institute of Technology)
Sustainability-Integrated Digital Framework for
Decision Making in Interior Construction Design
Rongxu Liu (University of Exeter)
Abstract
This study presents a novel digital tool that is
seamlessly integrated with cutting-edge Industry
4.0 technologies. The primary objective of the
tool is to cater effectively to the diverse
requirements of stakeholders involved in
interior construction projects. The research
explores the challenges faced by stakeholders,
examines the significance of digital tools in
facilitating the integration of emerging
technologies, and assesses the effectiveness of
the proposed application in improving project
outcomes. The anticipated results could
substantially reshape construction project
management by integrating stakeholder
requirements with cutting-edge technological
advancements.
pdf
Dynamic Weapon Target Assignment via Simulation,
Reinforcement Learning and Graph Neural Network
Seung Heon Oh (Seoul National University)
Abstract
The dynamic weapon target assignment (DWTA)
problem is an important resource scheduling
problem on the battlefield. In this paper, deep
reinforcement learning and a graph neural
network are used to optimize DWTA decision
making. The proposed method is evaluated
experimentally on several cases and compared
with other heuristic methods.
pdf
A Simulation Framework for Clearing Function-based
Release Date Optimization in a Material Requirements
Planning (MRP) Planned Production System
Wolfgang Seiringer (University of Applied Science Upper
Austria)
Abstract
In this research, a simulation framework is
developed to overcome the missing capacity
limitation of material requirements planning
(MRP) and obtain more reliable planning results.
To this end, the concept of clearing functions
(CF) is integrated as a set of constraints into
a mathematical optimization problem. Using CFs
as capacity constraints makes it possible to
identify how much of the current workload can
realistically be processed on the shop floor.
The CF-based release dates replace the fixed
planned lead times of MRP, which cannot handle
capacity limitations. To evaluate the
performance of CF-based release date planning, a
simulation experiment compares it with standard
MRP. First results show the potential of the CF
approach, but due to the complexity of the
release mechanism, adjustments to the planning
and optimization components of the simulation
are necessary.
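For orientation, a widely used saturating
clearing-function form (Karmarkar-type) is

    X_t \le \frac{K_1 W_t}{K_2 + W_t},

where W_t is the planned workload (initial WIP
plus releases) in period t, X_t the period
output, and K_1, K_2 capacity parameters; the
paper's exact CF and optimization model may
differ. Embedding such a constraint in the
release-date optimization keeps planned releases
consistent with what the shop floor can
realistically clear, unlike MRP's fixed planned
lead times.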
pdf
To What Extent Can Simulation Optimization be Used in
Wildlife Reserve Design?
Shengjie Zhou (Lancaster University)
Abstract
Wildlife reserves serve as a critical tool for
conserving wildlife species. The design of such
reserves can be formulated as a simulation
optimization problem, with the objective of
minimizing conservation costs while satisfying
species survival constraints. Our research
explores this problem formulation and the
relevant solution methods, with a particular
focus on the Chance Constrained Selection of the
Best algorithm. We formulate the problem using a
deterministic objective function subject to a
probabilistic constraint. To estimate the
survival probability under various policies, we
have developed a Gray Wolf (Canis lupus) model
that simulates the wolves’ dispersal,
breeding, and death processes in discrete time
steps. Our poster presents three scenarios that
demonstrate the potential use of Simulation
Optimization techniques in wildlife
conservation.
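A toy version of the estimation step, with
hypothetical demographic parameters and no
explicit dispersal, shows how the survival
probability entering the chance constraint can
be estimated by replication:

    import numpy as np

    rng = np.random.default_rng(5)

    def survives(patches, years=50, s=0.85, litter=1.6, cap=30):
        """One replication: does the population persist for `years`?"""
        pop = np.full(patches, 6)
        for _ in range(years):
            pop = rng.binomial(pop, s)                    # annual survival
            pop = pop + rng.poisson(litter * (pop // 2))  # breeding pairs reproduce
            pop = np.minimum(pop, cap)                    # patch carrying capacity
            if pop.sum() == 0:
                return False
        return True

    for patches in (1, 2, 4):           # candidate reserve designs
        p = np.mean([survives(patches) for _ in range(500)])
        print(f"{patches} patch(es): estimated survival probability {p:.2f}")

A design is feasible when this estimate clears
the chance-constraint level (e.g., 0.95) with
adequate statistical confidence, which is what
Chance Constrained Selection of the Best
controls.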
pdf
Real-time Delay Prediction for Kidney Transplantation
System
Najiya Fatma (Indian Institute of Technology Delhi)
Abstract
We present a combined simulation and machine
learning framework for predicting, at the time
of an end-stage renal disease patient's
registration on the kidney transplantation
waitlist, whether the patient will receive a
transplant before their health deteriorates. If
the patient is predicted to receive a
transplant, we also predict their time on the
waitlist before receiving the transplant. We
accomplish this by developing a discrete-event
simulation model of the kidney transplantation
system using patient-related and organ
donor-related information. We use the validated
model to record clinical and operational
features for each patient at the time of
registration, which are then used to train
machine learning algorithms to predict the
waitlist outcome and, in turn, the organ
allocation time. Our approach is suitable for
generating real-time delay predictions for
complex queueing systems where data regarding
the state of the system, which could be used to
train ML methods, is not maintained.
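The two-stage prediction step might look like
the following sketch; the features and the
simulated labels are hypothetical stand-ins for
what the DES would log at registration:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    rng = np.random.default_rng(6)

    # Stand-in features: [age, blood-type rarity, queue length, organ arrival rate]
    X = rng.normal(size=(5000, 4))
    gets_tx = (X[:, 2] < 0.3 * X[:, 3] + rng.normal(0, 0.5, 5000)).astype(int)
    wait = np.exp(1.0 + 0.5 * X[:, 2] - 0.3 * X[:, 3] + rng.normal(0, 0.2, 5000))

    clf = RandomForestClassifier(n_estimators=200).fit(X, gets_tx)          # stage 1
    mask = gets_tx == 1
    reg = RandomForestRegressor(n_estimators=200).fit(X[mask], wait[mask])  # stage 2

    x_new = rng.normal(size=(1, 4))     # a newly registered patient
    if clf.predict(x_new)[0] == 1:
        print("predicted wait before transplant:", reg.predict(x_new)[0])
    else:
        print("predicted to deteriorate before receiving a transplant")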
pdf
Epydemia: an Open-source Agent-based Model for
Infectious Disease Modeling
Sebastian Rodriguez Cartes (North Carolina State
University)
Abstract
Agent-based models provide a flexible framework
for modeling infectious diseases. We propose an
open-source simulation framework, EPyDEMIA,
that allows modeling multiple diseases infecting
a population, implementing complex agent
behaviors, and applying different interventions.
The framework is designed as a discrete-event
simulator and implemented in Python. Infections
spread through the population over a network of
multiple independent layers. We highlight the
utility of the framework by showcasing a
two-disease outbreak example. The tool's
modularity facilitates the implementation of
disease transmission models, streamlining the
analysis of the health impacts of infections.
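A minimal multilayer-network transmission loop
conveys the mechanism; this is plain Python with
networkx, not the EPyDEMIA API, and the layer
structure and rates are invented for the
example:

    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(7)
    N, GAMMA = 200, 0.1
    layers = {                          # independent contact layers
        "household": (nx.random_regular_graph(3, N, seed=1), 0.10),
        "workplace": (nx.erdos_renyi_graph(N, 0.02, seed=2), 0.03),
    }

    infected = set(int(i) for i in rng.choice(N, 5, replace=False))
    recovered = set()

    for day in range(60):
        new = set()
        for G, beta in layers.values():  # each layer transmits independently
            for i in infected:
                for j in G.neighbors(i):
                    if (j not in infected and j not in recovered
                            and rng.random() < beta):
                        new.add(j)
        recovered |= {i for i in infected if rng.random() < GAMMA}
        infected = (infected | new) - recovered

    print("final outbreak size:", len(infected | recovered))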
pdf
Developing a Bi-Level and Interoperable Framework for
Digital Twins: An Application For The Underground
Mining Industry
Mostafa DadkhahKalateh (Polytechnique Montréal)
Abstract
The study presents an innovative, modular, and
bi-level Digital Twin architecture designed
specifically for underground mining systems.
Aligned with Industry 4.0 principles, it aims to
integrate and enhance activities across the
entire mining value chain. The architecture
considers lifecycle phases, physical assets, and
operations in six functional layers, addressing
interoperability between the IoT, data, and the
various models. This holistic design facilitates
remote control of underground operations and
provides the flexibility to craft decision tools
tailored to individual configurations. The focus
is on merging real-time data with decision tools
to achieve a granular portrayal of the system
and support informed operational decisions. The
architecture adopts a service-oriented approach,
which requires partitioning the data and
decision models and yields a flexible,
extensible lower-level Fleet Management System
specified with UML methodologies. Ultimately,
this architecture is poised to improve mining
processes and resilience, raising operational
efficiency, safety, and adaptability.
pdf
Towards a Hybrid Discrete Event Simulation
Agent-based Model for the Texas State Mental Hospital
System
Maria Tomasso (Texas State University)
Abstract
State mental health hospitals provide a vital
service to individuals who pose a threat to
themselves or others. However, in recent years,
these facilities have struggled to meet demand,
resulting in a waitlist of over one thousand
patients. Despite legislative efforts to address
this issue, waitlist lengths persist and
continue to grow. This study employs a hybrid
discrete event simulation agent-based model
(DES-ABM), trained on publicly available
aggregate data, to model waitlists for state
mental health hospitals in Texas. Once trained,
the model enables projections of the impact of
various policy interventions and resource
allocation strategies on the waitlist. The model
successfully approximated waitlist lengths from
2020 to 2022, and we tested two interventions
involving the expansion of available beds,
recording their effects on the waitlist.
pdf
Significance of Traffic Loading for Evacuation and
Percolation-based Control Strategies
Ruqing Huang (The University of Tennessee, Knoxville)
Abstract
This paper investigates the significance of the
traffic loading rate for evacuation efficiency
through large-scale evacuation simulation on a
20×20 grid network, emphasizing the emergency
evacuation of the central 10×10 CBD area. There
exists an equilibrium between the flow loading
into the CBD and the flow exiting the CBD that
optimizes evacuation efficiency. Loading can be
above equilibrium (overloaded), at equilibrium,
or below equilibrium (underloaded), with
overloading causing widespread jams and
potential gridlock. Using percolation theory, we
also propose several strategies that confine
congestion spread to the CBD's edge, achieving
equilibrium with optimal evacuee exit rates.
pdf
Assessing the Impact of Social Network Settings on
COVID-19 Transmission in Cruise Ships: An Agent-Based
Modeling Approach
Akane Fujimoto Wakabayashi (Georgia Institute of
Technology)
Abstract
Cruise ship operations faced significant
disruptions during the COVID-19 pandemic. Close
quarters and dense populations of domestic and
international travelers create an environment in
which viruses spread easily. The cruise industry
and public health partners continue to develop
guidelines to control the spread of disease
within these settings. In this study, we
developed an agent-based model to simulate the
spread of COVID-19 in cruise ship environments.
The model considers various types of
interactions, including passenger-passenger,
passenger-crew, and crew-crew interactions
within networks and the cruise ship population.
We evaluated the impact of different social
network settings, such as group travel size,
intensity of interactions, and the initial
number of infection seeds, on the spread of
disease. The findings provide insights for
public health decision-makers, and the modeling
framework can inform other modeling activities
that rely on similar data streams.
pdf
Plenary
In Memoriam
Chair: James Wilson (North Carolina State University)
In Memoriam: Peter D. Welch (1928‒2023)
James Wilson (North Carolina State University)
pdf