Plenary - Opening Plenary: Simulation for Disney Parks and Experiences. Chair: Manuel D. Rossetti (University of Arkansas)
Simulation for Disney Parks and Experiences. Brian Walters and Amy Sardeshmukh (Disney Experiences) and Frederick Zahrn (The Walt Disney Company). Abstract: At Disney, simulation is used to analyze a variety of stages and aspects of the guest's vacation experience. Before the first guest rides a new theme park attraction, Disney works behind the scenes to build detailed simulation models that inform design decisions. In this presentation, we will discuss analytics utilized by Disney in the design, development, and maintenance of attractions and transportation systems, including component-level and system-level simulations of the dynamics of ride vehicles and animatronics. In a broader commercial context, we will also discuss the role of simulation in analytics applications supporting business decision making for Disney Experiences.
Plenary - Titans of Simulation: In Praise of Small Models. Chair: Susan R. Hunter (Purdue University)
In Praise of Small Models. Sally Brailsford (University of Southampton). Abstract: Over the course of my 35-year academic career, simulation models have gotten bigger and bigger, due to the exponential growth in computing power and the increased availability of large datasets. This trend has culminated in the current popularity of so-called "digital twins." The scientific definition of a digital twin is open to debate, but according to Wikipedia a digital twin is "a digital model of an intended or actual real-world physical product, system, or process that serves as the effectively indistinguishable digital counterpart of it for practical purposes." In this talk I challenge the usefulness (indeed, the point) of such massive models and argue that, based on my experience in healthcare applications, small models can often be far more impactful in practice.
Plenary - Titans of Simulation: Chasing Ambulances and COVID: Stories and Lessons. Chair: Susan R. Hunter (Purdue University)
Chasing Ambulances and COVID: Stories and Lessons. Shane G. Henderson (Cornell University). Abstract: Despite being a confirmed academic nerd, Shane has had the good fortune to be involved in quite a few projects with real impact. In this talk, he'll highlight two projects where simulation played a central role. In the first project, some simulation analysis he performed for an ambulance service provider was used in a consequential court case. In the second project, he was a member of a modeling team at Cornell University that played a central role in Cornell's decision to reopen its Ithaca campus for in-person instruction in the fall of 2020, in the midst of the COVID-19 pandemic. The team further advised Cornell throughout the pandemic on essentially all major decisions. It was a wild ride. He'll tell these stories and draw lessons from them that he hopes will be useful and thought-provoking, whether you work in academia, industry, government, or elsewhere.
Track Coordinator - Advanced Tutorials: Jeff Hong (Fudan University, School of Management), Giulia Pedrielli (Arizona State University)
Advanced Tutorials - A Tutorial on Nested Simulation. Chair: Henry Lam (Columbia University)
A Tutorial on Nested Simulation. Guangwu Liu (City University of Hong Kong) and Kun Zhang (Renmin University of China). Abstract: Nested simulation refers to the problem of estimating a functional of a conditional expectation that cannot be evaluated analytically and requires simulation. It finds a wide range of applications in operations research and machine learning, including portfolio risk measurement, pricing of complex derivatives, and Bayesian experimental design. Nested simulation typically proceeds at two levels. At the outer level, a number of scenarios are simulated. Then, at the inner level, given each scenario, one or multiple samples are simulated for estimating the conditional expectation. Various approaches have been proposed for estimating the conditional expectation and conducting nested simulation itself, focusing on point estimation and/or interval estimation, and addressing general or specified forms of the functional. In this tutorial, we review several approaches to nested simulation and discuss their contributions to the field.
Advanced Tutorials - A Tutorial for Monte Carlo Tree Search in AI. Chair: Guangxin Jiang (City University of Hong Kong)
A Tutorial for Monte Carlo Tree Search in AI. Michael C. Fu (University of Maryland), Daniel Qiu (Carnegie Mellon University), and Jie Xu (George Mason University). Abstract: We provide a tutorial introduction to Monte Carlo tree search (MCTS), a sampling/simulation-based method for solving problems of sequential decision making under uncertainty. MCTS gained prominence from its pivotal role in Google DeepMind's AlphaZero and AlphaGo, hailed as major breakthroughs in artificial intelligence (AI) due to AlphaGo defeating the reigning human world Go champion Lee Sedol in 2016 and the world's top-ranked Go player Ke Jie in 2017. AlphaZero, without requiring any domain-specific knowledge beyond the game rules (tabula rasa), achieved remarkable success by surpassing previous benchmarks in Go and outperforming leading AI opponents in chess (Stockfish) and shogi (Elmo) after just 24 hours of MCTS-driven reinforcement learning. We demonstrate the building blocks of MCTS and its performance through decision trees and the game of Othello.
Advanced Tutorials - Data-driven Simulation Optimization in the Age of Digital Twins: Challenges and Developments. Chair: Eunhye Song (Georgia Institute of Technology)
Data-driven Simulation Optimization in the Age of Digital Twins: Challenges and Developments. Enlu Zhou (Georgia Institute of Technology). Abstract: A digital twin is a virtual representation of a physical system, continuously updated with data from the physical system, enabling dynamic information exchange and decision-making. This bidirectional interaction, the key distinction from traditional simulations, introduces unique challenges to simulation optimization. These challenges encompass streaming data, system nonstationarity, and real-time decision-making. This tutorial reviews recent advancements and methodologies aimed at addressing these issues, with a focus on data-driven simulation optimization under streaming data.
Advanced Tutorials - Concept of Digital Twins for Autonomous Manufacturing through Virtual Learning and Commissioning. Chair: Zach Eyde (Arizona State University, Intel Corporation)
Concept of Digital Twins for Autonomous Manufacturing through Virtual Learning and Commissioning. Young Jae Jang (KAIST, Daim Research); Jaeung Lee and Ferdinandz Japhne (KAIST); and Sangpyo Hong, Seol Hwang, and Illhoe Hwang (Daim Research). Abstract: Autonomous manufacturing represents a paradigm shift in industrial operations, akin to autonomous driving. This paper explores the role of digital twins in enabling autonomous decision-making within discrete manufacturing environments operated by massive fleets of robotic agents. By integrating artificial intelligence (AI), particularly reinforcement learning, digital twins facilitate the management of complex automated material handling systems (AMHS), driving the transition towards software-defined factories (SDFs). In this paper, we demonstrate how digital twins support virtual learning, training the parameters for reinforcement learning, as well as virtual commissioning, optimizing system validation and testing through virtual and physical integrations.
Advanced Tutorials - Tutorial: Artificial Neural Networks for Discrete-event Simulation. Chair: Jun Luo (Shanghai Jiao Tong University)
Tutorial: Artificial Neural Networks for Discrete-event Simulation. Peter Haas (University of Massachusetts Amherst). Abstract: This advanced tutorial explores some recent applications of artificial neural networks (ANNs) to stochastic discrete-event simulation (DES). We first review some basic concepts and then give examples of how ANNs are being used in the context of DES to facilitate simulation input modeling, random variate generation, simulation metamodeling, optimization via simulation, and more. Combining ANNs and DES allows exploitation of the deep domain knowledge embodied in simulation models while simultaneously leveraging the ability of modern ML techniques to capture complex patterns and relationships in data.
Advanced Tutorials - Distributed Model Exploration with EMEWS. Chair: Zach Eyde (Arizona State University, Intel Corporation)
Distributed Model Exploration with EMEWS. Nicholson Collier, Justin M. Wozniak, Arindam Fadikar, Abby Stevens, and Jonathan Ozik (Argonne National Laboratory). Abstract: As high-performance computing resources have become increasingly available, new modes of applying and experimenting with simulation and other computational tools have become possible. This tutorial presents recent advancements to the Extreme-scale Model Exploration with Swift (EMEWS) framework. EMEWS is a high-performance computing (HPC) model exploration (ME) framework, developed for large-scale analyses (e.g., calibration, optimization) of computational models. We focus on three new use-inspired EMEWS capabilities: improved accessibility through binary installation; a new decoupled architecture (EMEWS DB) and task API for distributing workflows on heterogeneous compute resources; and improved EMEWS project creation capabilities. We present a complete worked example where EMEWS DB is used to connect a Python Bayesian optimization algorithm to worker pools running both locally and on remote compute resources. The example, including an R version, and additional details on EMEWS are made available on a public website.
Advanced Tutorials - Complex Systems Modeling and Analysis. Chair: Yijie Peng (Peking University)
Complex Systems Modeling and Analysis. Claudia Szabo (University of Adelaide). Abstract: Undesired or unexpected properties are frequent as large-scale, complex systems with non-linear interactions are designed and implemented to address real-life scenarios. Modeling these behaviors in complex systems, as well as analysing the large amounts of data generated in order to determine the effects of specific behaviors, remains an open problem. In this paper, we explore existing approaches to modeling complex systems. We present an in-depth overview of three main complex-systems properties and show how they can be modelled in well-known scenarios.
Advanced Tutorials - Introduction to Optimal Transport. Chair: David J. Eckman (Texas A&M University)
Introduction to Optimal Transport. Ilya Ryzhov (University of Maryland) and Raghu Pasupathy and Harsha Honnappa (Purdue University). Abstract: We review optimal transport (OT), which can be informally described as "deforming" a source probability distribution into a target probability distribution with minimal cost. OT was formulated more than two centuries ago and became famous for its relevance to economics and logistics, especially after the seminal work on linear programming by Kantorovich, Koopmans, and Dantzig. OT has since seen multiple resurgences, first due to the rise of computer vision in the 1990s, and more recently due to the maturing of large-scale optimization solvers alongside the rise of machine learning and artificial intelligence. This tutorial formally introduces OT to a simulation audience that is steeped in the concepts of operations research (OR). The early parts of the tutorial focus on application contexts. This is followed by the Monge and Kantorovich OT formulations along with key structural results and examples. The tutorial ends with a short section on semidiscrete OT.
Advanced Tutorials - Advanced Tutorial: Label-Efficient Two-Sample Tests. Chair: Raghu Pasupathy (Purdue University)
Advanced Tutorial: Label-Efficient Two-Sample Tests. Weizhi Li (Los Alamos National Laboratory) and Visar Berisha and Gautam Dasarathy (Arizona State University). Abstract: Hypothesis testing is a statistical inference approach used to determine whether data supports a specific hypothesis. An important type is the two-sample test, which evaluates whether two sets of data points are from identical distributions. This test is widely used, such as by clinical researchers comparing treatment effectiveness. This tutorial explores two-sample testing in a context where an analyst has many features from two samples, but determining the sample membership (or labels) of these features is costly. In machine learning, a similar scenario is studied in active learning. This tutorial extends active learning concepts to two-sample testing within this label-costly setting while maintaining statistical validity and high testing power. Additionally, the tutorial discusses practical applications of these label-efficient two-sample tests.
Advanced Tutorials - Simulation and AI for Critical Infrastructure. Chair: Enver Yucesan (INSEAD)
Simulation and AI for Critical Infrastructure. Qing-Shan Jia (Tsinghua University); Chao Duan (Xi'an Jiaotong University); and Shuo Feng, Yuhang Zhu, and Xiao Hu (Tsinghua University). Abstract: Simulation and artificial intelligence (AI) have played crucial roles in the design and operational optimization of critical infrastructures in modern societies. In this work we briefly review the latest developments in four fields: stability analysis in the electric power grid, supply-demand matching in the electric power grid, efficient simulation for autonomous driving, and the optimization of the power grid for AI itself. We hope this tutorial sheds some light on the synergy between simulation and AI for critical infrastructure in the near future.
Track Coordinator - Agent-Based Simulation: Andrew J. Collins (Old Dominion University), Chris Kuhlman (Virginia Tech)
Agent-Based Simulation, Healthcare and Life Sciences, Hybrid Simulation - Cross-Track Session 1: Applications. Chair: Burla Ondes (Purdue University)
Enhancing Forced Displacement Simulations: Integrating Health Facilities for Automatically Generated Routes Networks. Alireza Jahani, Maziar Ghorbani, Diana Suleimenova, Yani Xue, and Derek Groen (Brunel University London). Tags: agent-based, optimization, healthcare, logistics, transportation. Abstract: This paper introduces a novel approach to supporting healthcare accessibility for refugees during their movement to camps in regions with limited infrastructure. We achieve this by integrating the density of healthcare facilities into route networks created by customized pruning algorithms. Through rigorous data analysis and algorithm development, our research aims to optimize healthcare delivery routes and enhance healthcare outcomes for displaced populations. Our findings highlight Visit Tracking route pruning as the most effective method, with an Averaged Relative Difference (ARD) of 0.3837. Particularly in scenarios involving healthcare facility integration, this method outperforms others, including the manually extracted route network (0.4902), Direct Distance pruning (0.3912), Triangle pruning (0.3913), and Sequential Distance pruning (0.7846). Despite the inherent limitations of our proposed method, such as data availability and computational complexity, these quantifiable results underscore its potential contributions to healthcare planning, policy development, and humanitarian assistance efforts worldwide.
Patient Assignment and Prioritization for Multi-Stage Care with Reentrance. Wei Liu, Mengshi Lu, and Pengyi Shi (Purdue University). Tags: discrete-event, healthcare, Matlab. Abstract: In this paper, we study a queueing model that incorporates patient reentrance to reflect patients' recurring requests for nurse care and their rest periods between these requests. Within this framework, we address two levels of decision-making: the priority discipline decision for each nurse and the nurse-patient assignment problem. We introduce the shortest-first and longest-first rules in the priority discipline decision problem and show the condition under which each policy excels through theoretical analysis and comprehensive simulations. For the nurse-patient assignment problem, we propose two heuristic policies. We show that the policy maximizing the immediate decrease in holding costs outperforms the alternative policy, which considers the long-term aggregate holding cost. Additionally, both proposed policies significantly surpass the benchmark policy, which does not utilize queue length information.
Simulation-based Optimization for Large-scale Perishable Agri-food Cold Chain in Rwanda: Agent-based Modeling Approach. Aghdas Badiee, Adam Gripton, and Philip Greening (Heriot-Watt University) and Toby Peters (University of Birmingham). Tags: agent-based, optimization, supply chain, AnyLogic, Python. Abstract: The global food supply chain faces significant challenges in maintaining the quality and safety of perishable agri-food products. This study introduces a novel approach to demonstrating the efficiency of the perishable agri-food cold supply chain (FCC) by integrating optimization techniques and agent-based modeling (ABM) simulation. Addressing complexities and challenges such as precise temperature control, emission reduction, waste minimization, and finding the best implementation of cold chain infrastructure, the research applies ABM to model dynamic interactions within the FCC. By testing thousands of simulation scenarios in AnyLogic, the paper demonstrates how the proposed model can support strategic decision-making, estimate potential export levels, assess crop quality over time, and evaluate waste reduction compared to non-cold-chain scenarios. The research further discusses the implementation of the proposed model in a real case study in Rwanda, Africa, showcasing its contribution to optimizing configuration and reducing food loss and CO2 emissions.
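The shortest-first versus longest-first comparison in the patient assignment and prioritization paper above can be illustrated with a deliberately simplified sketch: a single server clearing a fixed batch of jobs, with no reentrance, rest periods, or nurse assignment (all names and parameters here are illustrative, not taken from the paper). For such a clearing system, serving shortest jobs first is known to minimize mean flow time:

```python
import random

def mean_flow_time(jobs, rule):
    """Serve all jobs on one server under a static priority rule.

    jobs: list of service times, all present at time zero.
    rule: 'shortest_first' or 'longest_first'.
    Returns the mean completion (flow) time.
    """
    order = sorted(jobs, reverse=(rule == "longest_first"))
    clock, total = 0.0, 0.0
    for service_time in order:
        clock += service_time   # job finishes after all higher-priority work
        total += clock
    return total / len(jobs)

random.seed(7)
jobs = [random.expovariate(1.0) for _ in range(200)]
spt = mean_flow_time(jobs, "shortest_first")
lpt = mean_flow_time(jobs, "longest_first")
# In a clearing system, shortest-first minimizes mean flow time, so spt <= lpt.
```

The paper's point is precisely that this textbook ordering need not carry over once reentrance and holding costs enter, which is what its theoretical and simulation analysis investigates.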
Agent-Based Simulation - Agent-Based Modeling and Transportation. Chair: Andrew J. Collins (Old Dominion University)
Modeling Urban Transport Choices: Incorporating Sociocultural Aspects. Kathleen Salazar-Serna (Universidad Nacional de Colombia, Pontificia Universidad Javeriana) and Lorena Cadavid and Carlos J. Franco (Universidad Nacional de Colombia). Abstract: This paper introduces an agent-based simulation model aimed at understanding urban commuters' mode choices and evaluating the impacts of transport policies to promote sustainable mobility. Crafted for developing countries, where utilitarian travel heavily relies on motorcycles, the model integrates sociocultural factors that influence transport behavior. Multinomial models and inferential statistics applied to survey data from Cali, Colombia, inform the model, revealing significant influences of sociodemographic factors and travel attributes on mode choice. Findings highlight the importance of cost, time, safety, comfort, and personal security, with disparities across socioeconomic groups. Policy simulations demonstrate positive responses to interventions like free public transportation, increased bus frequency, and enhanced security, yet with modest shifts in mode choice. Multifaceted policy approaches are deemed more effective, addressing diverse user preferences. Outputs can be extended to cities with similar sociocultural characteristics and transport dynamics. The methodology applied in this work can be replicated for other regions.
Exploring the Influences of Automated Shuttles on Mobility Pattern and Traffic System at Different Granularity Levels. Yun-Pang Flötteröd (German Aerospace Center (DLR)), Johannes Müller (AIT Austrian Institute of Technology GmbH), Daniel Krajzewicz and Jakob Erdmann (German Aerospace Center (DLR)), and Christian Rudloff (AIT Austrian Institute of Technology GmbH). Tags: agent-based, transportation. Abstract: This paper shows the abilities of MATSim and SUMO to reflect the real-world behavior of automated shuttles (AS) and their applicability to replicate and evaluate scenarios. Possible impacts of introducing AS at street and city level were investigated with regard to the permitted low operating speeds of AS. Two simulation studies, in Salzburg, Austria, and in Linköping, Sweden, were conducted. Together with the real pilot plans, different scenarios considering operational speed, frequency, and service type were addressed. No significant impacts on the respective traffic systems were revealed, even though the AS operated at a low speed (15-20 km/h). This is mainly due to the relatively low traffic density and other characteristics of the selected routes. Higher travel speed and Demand-Responsive-Transport operation help to improve the efficiency of AS and to increase public acceptance. However, additional push measures are needed to reduce the dominance of cars in rural settings and influence modal shift.
Assessing the Effects of Container Handling Strategies on Enhancing Freight Throughput. Sarita Rattanakunuprakarn, Mingzhou Jin, Mustafa Can Camur, and Xueping Li (University of Tennessee, Knoxville). Tags: agent-based, output analysis, system dynamics, supply chain, transportation. Abstract: As global supply chains and freight volumes grow, the U.S. faces escalating transportation demands. The heavy reliance on road transport, coupled with the underutilization of the railway system, results in congested highways, prolonged transportation times, higher costs, and increased carbon emissions. California's San Pedro Port Complex (SPPC), the nation's busiest, bears a significant share of these challenges. We utilize an agent-based simulation to replicate real-world scenarios, focusing on the intricacies of interactions in a modified intermodal inbound freight system for the SPPC. This involves relocating container classification to potential warehouses in California, Utah, Arizona, and Nevada, rather than exclusively at port areas. Our primary aim is to evaluate the proposed system's efficiency, considering cost and freight throughput, while also examining the effects of workforce shortages. Computational analysis suggests that strategically installing intermodal capabilities in select warehouses can reduce transportation costs, boost throughput, and foster resource balance in port complex areas.
Agent-Based Simulation - Agent-Based Modeling and Systematic Mapping. Chair: Andrew J. Collins (Old Dominion University)
Accelerating Hybrid Agent-Based Models and Fuzzy Cognitive Maps: How to Combine Agents who Think Alike? Philippe J. Giabbanelli (Old Dominion University) and Jack Beerman (University of Virginia). Tags: agent-based, sampling, variance reduction, behavior. Abstract: While agent-based models can create detailed artificial societies based on individual differences and local context, they can be computationally intensive. Modelers may offset these costs through a parsimonious use of the model, for example by using smaller population sizes (which limits analyses in sub-populations), running fewer what-if scenarios, or accepting more uncertainty by performing fewer simulations. Alternatively, researchers may accelerate simulations via hardware solutions (e.g., GPU parallelism) or approximation approaches that trade accuracy for compute time. In this paper, we present an approximation that combines agents who "think alike", thus reducing the population size and the compute time. Our innovation relies on representing agent behaviors as networks of rules (Fuzzy Cognitive Maps) and empirically evaluating different measures of distance between these networks. Then, we form groups of think-alike agents via community detection and simplify them to a representative agent. Case studies show that our simplifications remain accurate.
A Systematic Comparison for Consistent Scenario Development Using Microscopic Simulation Software. Abhilasha J. Saroj and Guanhao Xu (Oak Ridge National Laboratory), Yunli Shao (University of Georgia), and Chieh (Ross) Wang (Oak Ridge National Laboratory). Tags: agent-based, discrete-event, digital twin, transportation, Python. Abstract: This study explores a methodology that enables the development of consistent traffic microsimulations of emerging traffic and vehicle control technologies, for improved mobility and energy efficiency, across different modeling platforms. Researchers might study the same application on different platforms and need to benchmark across platforms. However, a systematic comparison of simulation software, especially for emerging mobility and energy efficiency applications, is lacking. To this end, a systematic scenario development and evaluation approach is presented and demonstrated to compare scenarios generated in different traffic microsimulation platforms. Network-level and vehicle-level trip performance results of the traffic scenario are evaluated in three microscopic simulation platforms: VISSIM, AIMSUN, and SUMO. The results indicate that the network-level performance is consistent among the three software suites except when demand is high, where the energy consumption performance varies.
Causal-based Rack Layout Optimization in Retail: Incorporating Agent-based Modeling and Causal Discovery. Shuang Chang, Shohei Yamane, and Koji Maruhashi (Fujitsu Laboratories Ltd.). Tags: agent-based, output analysis. Abstract: Rack layout design and optimization is a central research problem in retail management. To optimize rack layouts considering customers' preferences and routing behaviors under different layouts, agent-based models (ABM) have been developed and applied. However, conventional analysis methods may not be sufficient to proactively propose and evaluate explainable layout patterns that optimize pre-defined metrics. In this work, we extend a causal-based ABM analysis method to enable a causal understanding of ABM models across multiple scenarios, and incorporate it into a model calibrated with real-world data that simulates customers' in-store traffic for rack layout optimization. By elucidating the causal relations and changes among customers' preferences, movements, and layout patterns, we demonstrate that the incorporated model and analysis method enable rack layout optimization supported by causal explanations, improving the customer experience and store revenue compared with conventional ABM analysis methods.
Agent-Based Simulation - Agent-Based Modeling and Emerging Trends. Chair: Andrew J. Collins (Old Dominion University)
A Framework of Digital Twins for Modeling Human-Subject Word Formation Experiments. Hao He, Xueying Liu, Chris J. Kuhlman, and Xinwei Deng (Virginia Tech). Tags: agent-based, data analytics, estimation, behavior, student. Abstract: Agent-based models (ABMs) are used to simulate human-subject experiments. A comprehensive understanding of these human systems often requires executing large numbers of simulations, but these requirements are constrained by computational and other resources. In this work, we build a framework of digital twins for modeling human-subject experiments. The framework has three modules: ABMs of player behaviors built from game data; extensions of these models to represent virtual assistants (agents that are exogenously manipulated to create controlled environments for human agents); and an uncertainty quantification module composed of functional ANOVA and a Gaussian process-based emulator. The emulator is built from the extended ABM; we focus on emulator validation. By incorporating experimental data and agent-based simulation data, our proposed framework enhances the virtual representation of the dynamics in human-subject word formation experiments, which we consider a digital twin. Networked anagram experiments are used as an exemplar to demonstrate the methods.
Integrating Large Language Models into Agent Models for Multi-Agent Simulations: Preliminary Report. Hiromitsu Hattori, Arata Kato, and Mamoru Yoshizoe (Ritsumeikan University). Tags: agent-based, behavior. Abstract: There have been active attempts to integrate agents with large language models (LLMs) into various intelligent systems. In this paper, we describe an effort to integrate an LLM into an agent model for multi-agent simulation (MAS). One challenge in implementing MAS has been how to construct a computational model that accurately simulates human behaviors within the target environment. Building a model capable of capturing and reproducing the individualities of a diverse range of people has been challenging, both in terms of implementation costs and complexity. We propose a method to generate behavioral individualities using an LLM and to enable decision-making based on the surrounding environment. We conduct MAS experiments incorporating agents based on the proposed method and verify the validity of their behaviors.
Simulation-based Analysis of Hydrogen Refuelling Station to Support Future Hydrogen Trucks and Technological Advances. Abderrahim Ait Alla, Eike Broda, Michael Teucke, Lennart Steinbacher, Stephan Oelker, and Michael Freitag (BIBA - Bremer Institut für Produktion und Logistik GmbH at the University of Bremen). Tags: agent-based, discrete-event, logistics, AnyLogic. Abstract: Hydrogen is becoming increasingly relevant as an energy source to decrease carbon dioxide emissions across transportation, industry, and power generation sectors. One step to achieving this goal is to endorse the utilisation of hydrogen as a fuel in the transportation sector. A prerequisite for the widespread adoption of hydrogen trucks is accessible infrastructure, including refuelling stations. This paper presents a simulation model that analyses the potential spread of hydrogen infrastructure in the transport sector, focusing on refuelling station numbers and the feasibility of constructing new stations based on the projected increase in hydrogen trucks and refuelling speed. By simulating different scenarios, the model assesses infrastructure needs to meet projected hydrogen truck demand in the Bremen region in Northern Germany. Results suggest that with a refuelling rate of 5 kg/min and twelve stations, scenarios with over 50% hydrogen truck share are viable.
Agent-Based Simulation - Agent-Based Modeling and Health. Chair: Andrew J. Collins (Old Dominion University)
Incorporating The COM-B Model For Behavior Change Into An Agent-based Model of Smoking Behaviors: An Object-oriented Design David Tian, Hazel Y. Squires, Charlotte Buckley, and Duncan Gillespie (University of Sheffield); Harry Tattan-Birch, Lion Shahab, and Robert West (University College London); Alan Brennan (University of Sheffield); Jamie Brown (University College London); and Robin Purshouse (University of Sheffield) Tags: agent-based, conceptual modeling, behavior, complex systems, Python Abstract AbstractModeling trajectories in cigarette smoking prevalence, initiation and quitting for populations and subgroups of populations is important for policy planning and evaluation. This paper proposes an agent-based model (ABM) design for simulating the smoking behaviors of a population using the Capability, Opportunity, Motivation - Behavior (COM-B) model. Capability, Opportunity and Motivation are modeled as latent composite attributes which are composed of observable factors associated with smoking behaviors. Three forms of the COM-B model are proposed to explain the transitions between smoking behaviors: initiating regular smoking uptake, making a quit attempt and quitting successfully. The ABM design follows object-oriented principles and extends an existing generic software architecture for mechanism-based modeling. The potential of the model to assess the impact of smoking policies is illustrated and discussed. pdfAgent-Based Simulation Framework for Multi-Variant Surveillance Sifat Afroj Moon (Oak Ridge National Laboratory, University of Virginia); Jiangzhuo Chen, Baltazar Espinoza, Bryan Lewis, and Madhav Marathe (University of Virginia); Joseph Outten (Metaform); and Srinivasan Venkatramanan, Anil Vullikanti, and Andrew Warren (University of Virginia) Tags: agent-based, sampling, complex systems, COVID, digital twin Abstract AbstractEarly detection of an emerging VOC (Variant-Of-Concern) is essential for effective preparedness for a disease like COVID-19. 
The spread of an emerging VOC depends not only on its own disease dynamics but also on the state of the circulating variants and the susceptibility of the population. Resources for testing are typically quite limited, and a number of strategies have been considered for deploying them. However, it has been difficult to evaluate the performance of such strategies, especially their higher-order effects and inequities, while incorporating constraints on these resources. Here, we develop an agent-based surveillance framework, NETWORKDETECT, to study early warning systems for an emerging VOC. Our framework allows us to incorporate various population heterogeneities and resource constraints. pdf
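As a purely illustrative aside (not part of the paper above, and unrelated to NETWORKDETECT's actual design): the core surveillance question, how quickly limited random testing first detects a geometrically growing variant, can be sketched in a few lines. All names and parameter values below are hypothetical.

```python
import random

def detection_day(pop_size=100_000, tests_per_day=200, growth=1.4,
                  initial_cases=5, max_days=60, rng=None):
    """Simulate daily random testing of a population in which a new
    variant grows geometrically; return the first day a test catches it."""
    rng = rng or random.Random()
    cases = initial_cases
    for day in range(1, max_days + 1):
        prevalence = min(cases / pop_size, 1.0)
        # Probability that at least one of today's tests hits a case.
        p_detect = 1.0 - (1.0 - prevalence) ** tests_per_day
        if rng.random() < p_detect:
            return day
        cases = min(cases * growth, pop_size)
    return max_days  # not detected within the horizon

rng = random.Random(42)
days = [detection_day(rng=rng) for _ in range(500)]
mean_day = sum(days) / len(days)
```

Increasing `tests_per_day` shifts the detection-day distribution earlier; quantifying that trade-off under resource constraints is the kind of question such surveillance frameworks address at far greater fidelity.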
Track Coordinator - Analysis Methodology: Ben Feng (University of Waterloo), Eunhye Song (Georgia Institute of Technology), Wei Xie (Northeastern University) Analysis MethodologyMachine Learning and AI Chair: Wei Xie (Northeastern University)
LLM Enhanced Machine Learning Estimators for Classification Yuhang Wu (University of California, Berkeley); Yingfei Wang (University of Washington); Chu Wang (Amazon); and Zeyu Zheng (University of California, Berkeley) Abstract AbstractPre-trained large language models (LLM) have emerged as a powerful tool for simulating various scenarios and generating informative output given specific instructions and multimodal input. In this work, we analyze the specific use of LLM to enhance a classical supervised machine learning method for classification problems. We propose a few approaches to integrate LLM into a classical machine learning estimator to further enhance the prediction performance. We examine the performance of the proposed approaches through both standard supervised learning binary classification tasks, and a transfer learning task where the test data exhibit distribution shifts relative to the training data. Numerical experiments using four publicly available datasets are conducted and suggest that using LLM to enhance classical machine learning estimators can provide significant improvement in prediction performance. pdfEnhancing Language Model with Both Human and Artificial Intelligence Feedback Data Best Contributed Theoretical Paper Haoting Zhang, Jinghai He, Jingxu Xu, Jingshen Wang, and Zeyu Zheng (University of California, Berkeley) Tags: machine learning, optimization Abstract AbstractThe proliferation of language models has marked a significant advancement in technology and industry in recent years. The training of these models largely involves human feedback, a procedure that faces challenges including intensive resource demands and subjective human preferences. In this work, we incorporate feedback provided by artificial intelligence (AI) models instead of relying entirely on human feedback. We propose a simulation optimization framework to train the language model.
The objective function for training is approximated using feedback from both human and AI models. We employ the method of control variates to reduce the variance of the approximated objective function. Additionally, we provide a procedure for deciding the sample size to acquire preferences from both human and AI models. Numerical experiments demonstrate that our proposed procedure enhances the performance of the language model. pdfSensitivity Analysis on Interaction Effects of Policy-Augmented Bayesian Networks Junkai Zhao and Jun Luo (Shanghai Jiao Tong University), Wei Xie (Northeastern University), and Zixuan Bai (Shanghai Jiao Tong University) Abstract AbstractBiomanufacturing plays an important role in supporting public health and the growth of the bioeconomy. Modeling and studying the interaction effects among various input variables is very critical for obtaining a scientific understanding and process specification in biomanufacturing. In this paper, we use the Shapley-Owen indices to measure the interaction effects for the policy-augmented Bayesian network (PABN) model, which characterizes the risk- and science-based understanding of production bioprocess mechanisms. In order to facilitate efficient interaction effect quantification, we propose a sampling-based simulation estimation framework. In addition, to further improve the computational efficiency, we develop a non-nested simulation algorithm with sequential sampling, which can dynamically allocate the simulation budget to the interactions with high uncertainty and therefore estimate the interaction effects more accurately under a fixed total budget setting. pdf Analysis MethodologyDigital Twins and Calibration Chair: Linyun He (Georgia Institute of Technology)
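The control variates method mentioned in the feedback-data abstract above is a generic variance-reduction technique. A minimal, self-contained illustration on a toy integrand (illustrative only; not the authors' objective or procedure):

```python
import math
import random
import statistics

def cv_estimate(n, rng):
    """Estimate E[exp(U)] for U ~ Uniform(0,1), using U itself as a
    control variate with known mean 1/2.  Returns (plain MC, CV) estimates."""
    xs = [rng.random() for _ in range(n)]
    f = [math.exp(x) for x in xs]       # target samples
    g = xs                              # control variate samples, E[g] = 0.5
    fbar, gbar = statistics.mean(f), statistics.mean(g)
    # Estimated optimal coefficient beta = Cov(f, g) / Var(g).
    cov = sum((a - fbar) * (b - gbar) for a, b in zip(f, g)) / (n - 1)
    beta = cov / statistics.variance(g)
    adjusted = [a - beta * (b - 0.5) for a, b in zip(f, g)]
    return fbar, statistics.mean(adjusted)

rng = random.Random(0)
plain, cv = cv_estimate(10_000, rng)
# True value is e - 1; the CV estimate is typically far closer than plain MC.
```

Because exp(U) and U are highly correlated, subtracting `beta * (g - E[g])` removes most of the sampling noise without introducing bias.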
Digital Twin Calibration for Biological System-of-Systems: Cell Culture Manufacturing Process Fuqiang Cheng, Wei Xie, and Hua Zheng (Northeastern University) Abstract AbstractBiomanufacturing innovation relies on an efficient Design of Experiments (DoEs) to optimize processes and product quality. Traditional DoE methods, ignoring the underlying bioprocessing mechanisms, often suffer from a lack of interpretability and sample efficiency. This limitation motivates us to create a new optimal learning approach for digital twin model calibration. In this study, we consider the cell culture process multi-scale mechanistic model, also known as Biological Systems-of-Systems (Bio-SoS). This model with a modular design, composed of sub-models, allows us to integrate data across various production processes. To calibrate the Bio-SoS digital twin, we evaluate the mean squared error of model prediction and develop a computational approach to quantify the impact of parameter estimation error of individual sub-models on the prediction accuracy of the digital twin, which can guide sample-efficient and interpretable DoEs. pdfCalibrating Digital Twins via Bayesian Optimization with a Root Finding Strategy Yongseok Jeon and Sara Shashaani (North Carolina State University) Abstract AbstractCalibrating digital twins is a challenging task that various methodologies have been used to address. Bayesian Optimization is a prominent tool for this purpose, albeit with computational limitations. We propose root-finding strategies within the Bayesian Optimization framework that are tailored to the digital twin calibration task. Employing a root-finding scheme introduces new acquisition functions and offers unique advantages over the traditional minimization strategy, particularly when using a continuous surrogate model. We demonstrate our findings through a range of motivating examples and calibration tasks.
pdfDigital Twin Validation with Multi-epoch, Multi-variate Output Data Linyun He (Georgia Institute of Technology), Luke Rhodes-Leader (Lancaster University), and Eunhye Song (Georgia Institute of Technology) Abstract AbstractThis paper studies validation of a simulation-based process digital twin (DT). We assume that whenever the DT is queried, the system state is recorded. Then, the DT simulator is initialized to match the system state and the simulations are run to predict the key performance indicators (KPIs) at the end of each time epoch of interest. Our validation question is whether the distribution of the simulated KPIs matches that of the system KPIs at every epoch. Typically, these KPIs are multi-variate random vectors and non-identically distributed across epochs, making it difficult to apply the existing validation methods. We devise a hypothesis test that compares the marginal and joint distributions of the KPI vectors, separately, by transforming the multi-epoch data to identically distributed observations. We empirically demonstrate that the test has good power when the system and the simulator sufficiently differ in distribution. pdf Analysis MethodologyAdvanced Computational Methods Chair: Isabelle Rao (University of Toronto)
Zero Stability in Hierarchical Co-Simulation Irene Hafner (dwh GmbH); Martin Bicher (TU Wien); and Niki Popper (TU Wien, dwh GmbH) Tags: distributed, parallel, complex systems Abstract AbstractThis work presents investigations on zero-stability of hierarchical co-simulation methods with an arbitrary number of co-simulation levels. In comparison to traditional co-simulation, where all participating systems are coordinated by a single co-simulation, hierarchical co-simulation allows the introduction of further co-simulations on several levels beneath a top-level co-simulation. This way, individual macro step sizes and orchestration algorithms may be used on every level. In this paper, we investigate the implications of the introduction of such a hierarchy, which may extend to an arbitrarily chosen number of levels, on the important convergence property of zero stability. pdfUsing COSIMLA Within Policy Iteration for MDPs with Large State Spaces Yifu Tang (University of California, Berkeley); Peter W. Glynn (Stanford University); and Zeyu Zheng (University of California, Berkeley) Tags: Monte Carlo, complex systems, student Abstract AbstractClassical policy iteration methods to solve Markov Decision Processes (MDPs) incur a computational complexity that critically depends on the size of state space. Such computational complexity can be prohibitive when the size of state space is enormous or even countably infinite. To improve the computational effectiveness, we develop a computational method that strategically integrates policy iteration and a recently developed approach called COSIMLA (Combining Numerical Linear Algebra and Simulation; Zheng et al. 2022). We provide analysis for the proposed computational method and demonstrate its comparative advantages through numerical experiments. 
pdfFast Stochastic Epidemic Simulations And An Adaptation Of The Next Generation Matrix For A COVID-19 Epidemic Model Of Social Distancing Isabelle Rao (University of Toronto) and Stephen Chick (INSEAD) Abstract AbstractDirect stochastic simulations of medium to large scale Markovian processes with population dynamics may have runtimes that are proportional to the population size, if they account for each state transition of each individual in the population. Several approaches to speed up such simulations have been proposed. We use a discrete-time, Euler-forward type approximation for state transition functions that simulates all transitions within a given time step in an effort to improve run times, at the expense of some (potentially correctable) bias. We illustrate this with a stylized model of COVID-19 social distancing interventions in the United Arab Emirates. We also adapt the next generation matrix method of Hill and Longini (2003) to a continuous time, discrete state model. The approach accelerates simulation run times from a linear scaling of run times in population size to a constant that depends on the number of possible state transitions. pdf Analysis MethodologyEstimation Techniques Chair: Xi Chen (Virginia Tech)
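The Euler-forward, aggregate-transition idea in the Rao and Chick abstract above can be sketched as follows: instead of simulating each individual's events, all transitions within a time step are drawn at once as (approximately) binomial counts, so runtime no longer scales with population size. The binomial sampler below is a crude clipped-normal stand-in, and all parameter values are illustrative only.

```python
import math
import random

def approx_binomial(n, p, rng):
    """Normal approximation to Binomial(n, p), clipped to [0, n].
    (A crude stand-in; a real implementation would use an exact sampler.)"""
    if n <= 0 or p <= 0.0:
        return 0
    mu = n * p
    sigma = math.sqrt(n * p * (1.0 - p))
    return max(0, min(n, round(rng.gauss(mu, sigma))))

def sir_euler(pop=1_000_000, i0=100, beta=0.3, gamma=0.1, dt=0.5,
              days=200, seed=1):
    """Discrete-time binomial-step SIR: each step draws aggregate counts of
    new infections and recoveries rather than per-individual events."""
    rng = random.Random(seed)
    s, i, r = pop - i0, i0, 0
    peak = i
    for _ in range(int(days / dt)):
        p_inf = 1.0 - math.exp(-beta * dt * i / pop)  # per-susceptible prob
        p_rec = 1.0 - math.exp(-gamma * dt)           # per-infective prob
        new_inf = approx_binomial(s, p_inf, rng)
        new_rec = approx_binomial(i, p_rec, rng)
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return s, i, r, peak

s_end, i_end, r_end, peak = sir_euler()
```

The cost per step depends only on the number of transition types, not on `pop`, which is exactly the run-time scaling advantage the abstract describes; the price is discretization bias controlled by `dt`.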
Nested Simulation for Value-at-Risk with Precision Tolerance Xianyu Kuang, Guangwu Liu, and Qianwen Zhu (City University of Hong Kong) Abstract AbstractThis paper investigates nested simulation for estimating Value-at-Risk (VaR), a widely adopted risk measure in practice. We formulate a mathematical framework tailored to a practical setting where a risk estimate is measured up to a certain precision level, and any risk estimate within certain range specified by the precision tolerance level is deemed acceptable. Within this framework, we propose a rounded estimator of VaR that explicitly accounts for the pre-specified precision tolerance level, and demonstrate that its error can decay exponentially fast as the sampling budget increases. An important implication of our theoretical result is that a finite inner sample size may suffice for nested simulation in our setting, leading to a budget allocation rule that deviates substantially from the standard nested simulation procedure. Numerical examples confirm the theoretical findings, showcasing the consistent performance of the proposed rounded estimator. pdfSome Asymptotic Regimes for Quantile Estimation Marvin K. Nakayama (New Jersey Institute of Technology) and Bruno Tuffin (Inria, University of Rennes) Tags: estimation, Monte Carlo, output analysis, variance reduction, rare events Abstract AbstractThe paper examines the relative errors (REs) of quantile estimators of various stochastic models under different asymptotic regimes. Depending on the particular limit considered and the Monte Carlo method applied, the RE may be vanishing, bounded, or unbounded. We provide examples of these possibilities. 
pdfNested Heteroscedastic Gaussian Process for Simulation Metamodeling Jin Zhao and Xi Chen (Virginia Tech) Tags: data analytics, estimation, machine learning, metamodeling, output analysis Abstract AbstractThis paper introduces the nested heteroscedastic Gaussian process approach (NHGP) to tackle simulation metamodeling with large-scale heteroscedastic datasets. NHGP achieves scalability by aggregating sub-stochastic kriging (sub-SK) models built on disjoint subsets of a large-scale dataset, making it user-friendly for SK users. We show that the NHGP predictor possesses desirable statistical properties, including being the best linear unbiased predictor among those built by aggregating sub-SK models and being consistent. The numerical experiments demonstrate the competitive performance of NHGP. pdf Analysis MethodologyVariance Reduction Chair: Guangwu Liu (City University of Hong Kong)
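As minimal backdrop to the nested-simulation Value-at-Risk paper above: with a single simulation layer, VaR estimation reduces to an empirical quantile, and a precision tolerance can be applied by rounding the estimate to a grid. This is a loose illustration only, not the paper's rounded estimator or its nested setting.

```python
import random

def var_estimate(losses, alpha=0.95):
    """Empirical Value-at-Risk: an alpha-level order statistic of losses."""
    xs = sorted(losses)
    k = min(len(xs) - 1, int(alpha * len(xs)))
    return xs[k]

def rounded(x, tol=0.01):
    """Round an estimate to a grid of width `tol` (the precision tolerance)."""
    return round(x / tol) * tol

rng = random.Random(7)
losses = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
v = var_estimate(losses)  # for N(0,1) at alpha=0.95, close to 1.645
```

Under a tolerance, any estimate in the same grid cell as the true VaR is acceptable, which is the relaxation the paper exploits to show that a finite inner sample size can suffice.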
An Improved Halton Sequence for Implementation in Quasi-Monte Carlo Methods Nathan Kirk and Christiane Lemieux (University of Waterloo) Abstract AbstractDespite possessing the low-discrepancy property, the classical d-dimensional Halton sequence is known to exhibit poorly distributed projections when d becomes even moderately large. This, in turn, often implies bad performance when implemented in quasi-Monte Carlo (QMC) methods in comparison to, for example, the Sobol’ sequence. In an attempt to remedy this issue, we propose an adapted Halton sequence built from integer- and irrational-based van der Corput sequences and show empirically improved performance with respect to the accuracy of estimates in numerical integration and simulation. In addition, for the first time, a scrambling algorithm is proposed for irrational-based digital sequences. pdfPre-Scrambled Digital Nets for Randomized Quasi-Monte Carlo Pierre L'Ecuyer, Youssef Cherkanihassani, and Mohamed El Amine Derkaoui (University of Montreal) Abstract AbstractWhen using digital nets for randomized quasi-Monte Carlo, the generating matrices are usually randomly scrambled before applying a random digital shift. This scrambling is to remove the excessive structure that the original matrices may have. In this paper, we explore the idea of pre-scrambling the generating matrices to "optimize" them in some way so that applying the random digital shift alone becomes sufficient. For the optimization criterion, we experiment with a class of recently-proposed figures of merit based on truncated versions of error and variance bounds obtained by bounding the coefficients in the Walsh expansion of smooth integrands. We summarize our numerical experiments and some difficulties encountered.
pdfAn Efficient Finite Difference Approximation Guo Liang (Renmin University of China), Guangwu Liu (City University of Hong Kong), and Kun Zhang (Renmin University of China) Abstract AbstractEstimating stochastic gradients is pivotal in fields like service systems in operations research. The classical method for this estimation is the finite difference approximation, which entails generating samples at perturbed inputs. Nonetheless, practical challenges persist in determining the perturbation and obtaining an optimal finite difference estimator with the smallest mean squared error (MSE). To tackle this problem, we propose a double sample-recycling approach in this paper. Firstly, pilot samples are recycled to estimate the optimal perturbation. Secondly, recycling these pilot samples and generating new samples at the estimated perturbation lead to an efficient finite difference estimator. In numerical experiments, we apply the estimator in two examples, and numerical results demonstrate its robustness, as well as agreement with the theory presented, especially in the case of small sample sizes. pdf Analysis MethodologyEfficiency and Accuracy in Ranking and Selection Chair: Jaime Gonzalez (Georgia Institute of Technology)
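The classical integer-base van der Corput construction underlying the Halton-sequence paper above fits in a few lines; the paper's irrational-based and scrambled variants are not reproduced here, this is only the textbook baseline.

```python
def van_der_corput(n, base=2):
    """n-th term of the base-b van der Corput sequence (radical inverse):
    reflect the base-b digits of n about the radix point."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, d = divmod(n, base)
        q += d * bk
        bk /= base
    return q

def halton(n, bases=(2, 3)):
    """First n points of the Halton sequence in len(bases) dimensions,
    using one coprime base per coordinate."""
    return [tuple(van_der_corput(i, b) for b in bases) for i in range(1, n + 1)]

# QMC estimate of the integral of f(x, y) = x*y over the unit square
# (the true value of the integral is 1/4).
pts = halton(4096)
qmc_est = sum(x * y for x, y in pts) / len(pts)
```

In two dimensions the bases 2 and 3 give well-distributed projections; the poor projections the abstract refers to appear when many larger, nearly commensurate prime bases are combined in high dimensions.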
Finding Feasible Systems for a Stochastic Constraint with Relaxed Tolerance Levels Chuljin Park (Hanyang University), Sigrún Andradóttir and Seong-Hee Kim (Georgia Institute of Technology), and Yuwei Zhou (Indiana University) Abstract AbstractWe consider the problem of finding feasible systems with respect to a stochastic constraint when the performance of each system needs to be evaluated via simulation. We develop a new procedure, referred to as the Indifference-Zone Relaxation procedure, to lessen inefficiencies of existing procedures derived under the assumption that all systems are exactly the tolerance level away from the threshold. Specifically, our procedure introduces a set of relaxed tolerance levels and simultaneously implements two subroutines for each relaxed tolerance level: one to identify clearly feasible systems and the other to exclude clearly infeasible systems. As a result, the proposed procedure allows early determination of feasibility for some systems, while maintaining the statistical guarantee. The efficiency of the procedure is investigated through experimental results. pdfFinite Budget Allocation Improvement in Ranking and Selection Xinbo Shi and Yijie Peng (Peking University) and Bruno Tuffin (Inria, Univ Rennes, CNRS, IRISA) Tags: ranking and selection, Matlab, student Abstract AbstractThis paper introduces a new perspective on the problem of finite sample Ranking and Selection. An asymptotically equivalent approximation to the probability of correct selection in terms of power series is derived beyond the classic large deviations principle that has been widely adopted for the design and measurement of allocation policies. The novel approximation method provides more information on the finite sample performance of allocation policies, based on which a new finite computing budget allocation policy is proposed. The asymptotically equivalent approximation may also serve as an estimate of policy performance after allocating the samples. 
We develop a simple finite computing budget allocation policy based on our approximation and carry out experiments in various settings to show its superiority. pdfNonparametric Input-Output Uncertainty Comparisons Jaime Gonzalez, Johannes Milz, and Eunhye Song (Georgia Institute of Technology) Abstract AbstractWe consider the problem of inferring the system with the best simulation output mean among k systems when the simulation model is subject to input uncertainty caused by estimated common input models from finite data. The Input-Output Uncertainty Comparisons (IOU-C) procedure is designed to return a set of solutions that contains the best solution with an asymptotic probability guarantee when parametric input models are adopted. We extend this framework to nonparametric IOU-C (NIOU-C) when empirical distributions of the data are adopted as input models. Representing the simulation output mean of each system as a functional of the common empirical distributions via the functional Taylor series expansion, we propose two methods that rely on the nonparametric delta method and an ambiguity set formulation, respectively. We provide numerical examples to test the performance of our methods and show that they outperform the IOU-C. pdf Analysis MethodologyImportance Sampling and Likelihood Method Chair: Ben Feng (University of Waterloo)
Importance Sampling in Optimization Under Uncertainty Using Surrogate Models Xiaotie Chen and David L. Woodruff (UC Davis) Tags: optimization, sampling, student Abstract AbstractFor the purpose of computing the expected value of a stochastic optimization problem via simulation, we propose a method for efficiently constructing importance sampling distributions using surrogate modeling. A software implementation of the method, called SMAIS, is available on GitHub. We use this software in experiments to demonstrate that our method can outperform Monte Carlo simulation. We also show good parallel efficiency for up to 16 processors, allowing a speedup of more than 10. Our method uses adaptive sample sizes, so it is not very sensitive to sample size parameters. pdfGeneralizing the Generalized Likelihood Ratio Method through a Push-Out Leibniz Integration Approach Best Contributed Theoretical Paper - Finalist Xingyu Ren and Michael C. Fu (University of Maryland, College Park) Tags: discrete-event, estimation, Monte Carlo Abstract AbstractWe generalize the generalized likelihood ratio (GLR) method through a novel push-out Leibniz integration approach. Extending the conventional push-out likelihood ratio (LR) method, our approach allows the sample space to be parameter-dependent after the change of variables. Specifically, leveraging the Leibniz integral rule enables differentiation of the parameter-dependent sample space, resulting in a surface integral in addition to the usual LR estimator, which may necessitate additional simulation. Furthermore, our approach extends to cases where the change of variables only "locally" exists. Notably, the derived estimator includes existing GLR estimators as special cases and is applicable to a broader class of discontinuous sample performances. Moreover, the derivation is streamlined and more straightforward, and the requisite regularity conditions are easier to understand and verify. pdf
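As a generic illustration of importance sampling itself (a textbook normal-shift example, unrelated to SMAIS, surrogates, or the GLR method above): sampling from a proposal centered on the rare region and reweighting by the likelihood ratio makes a rare-event probability estimable with modest sample sizes.

```python
import math
import random

def rare_prob_is(threshold=4.0, n=20_000, seed=3):
    """Estimate P(X > t) for X ~ N(0,1) by sampling Y ~ N(t,1) and
    reweighting each hit by the likelihood ratio phi(y) / phi(y - t)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(threshold, 1.0)  # proposal centered at the threshold
        if y > threshold:
            # likelihood ratio N(0,1) / N(t,1) simplifies to exp(-t*y + t^2/2)
            total += math.exp(-threshold * y + threshold * threshold / 2.0)
    return total / n

est = rare_prob_is()  # true value is about 3.17e-5
```

A plain Monte Carlo estimator would see roughly one exceedance per 30,000 samples here; the shifted proposal hits the region about half the time, and the likelihood-ratio weights keep the estimator unbiased.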
Aviation Modeling and Analysis Track Coordinator - Aviation Modeling and Analysis: Miguel Mujica Mota (Amsterdam University of Applied Sciences), Michael Schultz (Bundeswehr University Munich) Aviation Modeling and AnalysisArtificial Intelligence Applications in Aviation Chair: Raul de Celis (Universidad Rey Juan Carlos)
A Neural Network Application for Non-gyroscope Based Aircraft Attitude Determination Raul de Celis and Luis Cadarso (Universidad Rey Juan Carlos) Tags: machine learning, neural networks, aviation, transportation, Matlab Abstract AbstractAccurate aircraft navigation and control depend on the estimation framework for attitude and position. Aircraft rotation can be obtained by measuring two different vectors, for instance gravity and velocity, in two distinct reference coordinate systems: body and inertial local-horizon axes. The inertial reference frame’s velocity vector is determined using Global Navigation Satellite Systems sensors, while the body reference frame’s velocity vector is obtained through integrating accelerometer measurements. To estimate the gravity vector in the body-fixed reference frame, an approach is introduced based on a neural network that determines aircraft aerodynamic forces. Through the derivation of flight dynamics equations, the gravity vector is finally determined. The combination of these vectors facilitates determination of the body rotation. Simulations, employing nonlinear flight dynamics models, demonstrate that accurate aircraft attitude determination is attainable through the integration of just accelerometers, Global Navigation Satellite Systems sensors, and the proposed methodology. pdfGenAir: Generative AI for Resilient Urban Air Mobility with VTOLs in Disaster Evacuation Abdullah Alsaheal, William Hoy, and Priscila Haro (University of Miami); Asad Rehman (Saint Louis University-Madrid); and Nurcin Celik (University of Miami) Abstract AbstractCoastal regions face growing threats, making timely and safe evacuation paramount. Current plans rely heavily on congested ground transportation, leading to delays and heightened stress levels.
The emerging field of Urban Air Mobility (UAM), utilizing Vertical Take-Off and Landing Vehicles (VTOLs), promises to alleviate these issues by providing solutions for quick evacuation strategies. Here, we aim to leverage Generative Adversarial Networks (GANs) to expand limited datasets with synthetic data specific to disaster scenarios, evacuation routes, airspace considerations, and the impact of real-time weather events, enabling robust simulation of UAM deployment in disaster evacuations. We identified two applicable use cases, i) UAM for extreme weather emergency evacuation and ii) hospital evacuations using VTOLs, and illustrated their impact. This research seeks to pave the way for optimized, data-driven evacuation planning with UAM and VTOLs, ultimately enhancing the safety and efficiency of evacuations in the face of extreme events. pdf
Track Coordinator - Commercial Case Studies: Scott Chaney (MOSIMTEC), Wendy Xi Jiang (Northwestern University), Saurabh Parakh (MOSIMTEC, LLC, MOSIMTEC), David T. Sturrock (Simio LLC) Commercial Case StudiesSimulating Material Handling Systems - Part 1 Chair: Amy Greer (MOSIMTEC, LLC)
A Flexible Material Handling Network Model to Refine Major Capital Design Amy Greer and Saurabh Parakh (MOSIMTEC, LLC) and Alan Barnard (Goldratt Research Labs) Abstract AbstractA steel manufacturer plans to expand a facility to become the largest single-location steel manufacturing facility in their country. The current capacity of the plant will be initially doubled within a year, and then doubled again within a few years. The central raw material handling system is located inside the plant area to supply raw material to all operating plants without interruption.
Goldratt Research Labs worked with MOSIMTEC to develop a flexible AnyLogic simulation model, allowing the manufacturer to understand if planned material handling changes will be able to keep up with demand. The AnyLogic model allowed for understanding conveyor network requirements, along with replenishment policies. The Goldratt/MOSIMTEC engagement uncovered critical connections that were missing in the design. The simulation engagement ensured these connections were addressed during the design phase, as opposed to discovering them in the far more costly operational phase. pdfOptimizing Material Handling in Metal Stamping Operations: A Case Study on the Implementation of Automated Guided Vehicles Yiyun Fei, Xinping Deng, Jorge Bao, and Roberto Lu (TE Connectivity) Abstract AbstractThis paper discusses material handling automation challenges and future enhancements in metal stamping production at TE Connectivity. Currently, the production environment involves labor-intensive processes, where water spiders manually transport materials. This setup is non-value-added and is complicated by various sizes of reels and boxes, with a high-demand storage area. The proposed solution introduces Automated Guided Vehicles (AGVs) to automate these tasks. To validate and refine the system, discrete event simulations of current and future states were conducted. Initial simulations showed suboptimal AGV utilization, which led to iterative adjustments including task variations and traffic flow modifications, ultimately improving utilization rates significantly. Additional studies on AGV charging locations and operational scenarios helped in fine-tuning the system. This transition to AGVs not only promises operational efficiency but also improves safety and adaptability, facilitating a data-driven approach to strategic decision-making in production management.
pdfIdle Vehicle Allocation Problem in Automated Material Handling Systems: A Case Study in Rechargeable Battery Production Jaeung Lee (KAIST); Jinhyeok Park (Daim Research); and Young Jae Jang (KAIST, Daim Research) Abstract AbstractIn this paper, we investigate the issue of determining and allocating park locations for idle vehicles in Automated Material Handling Systems (AMHSs) within manufacturing industries. We introduce a case study on an AGV system in a rechargeable battery production facility, confirming the effectiveness of the heuristic proposed by Bruno et al. (2000) in enhancing AMHS performance. pdf Commercial Case StudiesSimulation to Improve Customer Experience Chair: Howard Mall (Universal Creative, Advanced Technology Interactives)
Streamlining Security: Using Agent-Based Simulation to Reduce Congestion and Enhance Campus Security Screening Lourdes Murphy (National Institutes of Health (NIH)) and Nelson Alfaro Rivas and Yusuke Legard (MOSIMTEC, LLC) Abstract AbstractThe National Institutes of Health (NIH) main campus, in Bethesda, MD, has more than 95 buildings located on over 300 acres. The West Drive Visitor Screening Facility (“Building 68”) is the primary entrance where visitors and their vehicles must go through a security inspection. The building infrastructure dates from before 9/11, after which the security requirements that are in effect today were formulated. In any given hour, there are up to 50 vehicles that use this entrance. NIH developed plans for a facility expansion to reduce both vehicle and pedestrian congestion. MOSIMTEC utilized simulation modeling to provide NIH with insight on the impact of potential layout changes to the building. This presentation further describes the project, the system being modeled, the simulation model’s inputs and outputs, and the key modeling approach, which includes integration of AnyLogic’s road and pedestrian libraries. This presentation also highlights the analysis completed. pdfQuantifying the Impact of Lost Customers in Quick Service Restaurant (QSR) Operations Rainer Dronzek (Simulation Modeling Services, LLC) Abstract AbstractSimulation modeling is used extensively in the quick service restaurant (QSR) industry to help design and improve operations. These models can also be used to quantify the financial, customer satisfaction, and operational performance of QSR systems and processes with respect to lost customers. We'll discuss the types and behaviors of lost customers, relate them to simulation definitions, and see how lost revenue and customer satisfaction levels can be quantified in a simulation model. 
The session will include a group interactive exercise, visualizations and explanations of lost customer behaviors, a discussion of how this application can evolve to a simulation digital twin, and a demonstration of a simulation model used to analyze and tell the story of lost customers. pdfUniversal Virtual Attraction Controls Emulation Howard Mall, Wanyea Barbel, Jason Heinritz, Angelica Samsoe, and Christopher Welch (Universal Creative) Abstract AbstractControl systems for theme park attractions are increasingly complex integrations of hardware and software. Opening dates for theme park attractions don’t move, which leads to compressed commissioning schedules. Virtual Attraction Controls Emulation (VACE) is the platform Universal Creative uses to push the schedule back in time and reduce risk, effort, and costs in the field for all Universal parks in the Universal Destinations and Experiences (UDX) portfolio. VACE enables virtual commissioning and the application of Test Driven Development (TDD) principles to industrial software. This platform enables real-time simulation of ride performance, emulation of the ride components, and automated testing as a stand-in for the actual ride. Significant parts of the control system can be tested even before ground is broken for construction. Its successful application to recently opened roller coasters and dark rides will be presented showing the architecture, integration with commercial industrial programming environments, challenges, and measures of success. pdf Commercial Case StudiesSimulating Production Systems - Part 1 Chair: Peter Helmetag (TriMech)
Productivity Evaluation of Introducing New Welding Methods in Deck House Block Factory using Discrete Event Simulation Jeongman Lee, Seungwoo Jeon, and Dongha Lee (HD Korea Shipbuilding & Offshore Engineering) Abstract AbstractThis study introduces a discrete event simulation-based modeling approach to evaluate productivity impacts when new welding techniques are introduced in deck house block production for shipbuilding. The hull manufacturing process is divided into stages including steel storage, fabrication, assembly, pre-outfitting, painting, and erection, with a focus on deck house blocks in this study. Four key factors—welding speed, welder movement speed, setup time, and finishing time—are used to evaluate the effectiveness of new welding methods, which may not always outperform existing methods. Discrete event simulation is used to analyze these factors, accounting for production volume characteristics and manufacturing logistics complexity. The developed simulation model incorporates welding times and processes to assess productivity, analyzing indicators like total welding length, crane utilization, and cycle times, along with sensitivity analyses of welder numbers. pdfRobotic Brazing Line Using Factory Simulation Engineer Peter Helmetag and Kurt Valutis (TriMech) Abstract AbstractTriMech developed a model in conjunction with a manufacturing client to evaluate a robotic brazing line against production targets across an order of magnitude. The focus of the study was how a 7th axis material handling robot could prioritize tasks for different production schedules and if the line could support the highest production target.
In this presentation the process changes, input data, modelling methods, and results of the study will be covered. A virtual twin of the brazing line was developed using Dassault’s Factory Flow Simulation software solution. This model will be shown as a part of the presentation. pdfSimulation Modeling Methods for Analysis and Validation of Mid-term Production Plan Operations in Shipyard Seungwoo Jeon, Yonghee Kim, Jongpil Yun, Jeongman Lee, and Dongha Lee (HD Korea Shipbuilding & Offshore Engineering) and Donghyun Lee, Jisoo Park, Changha Lee, and Sang Do Noh (Sungkyunkwan University) Abstract AbstractProduction plans in the shipbuilding industry are divided into long-term, mid-term, and short-term production plans. The long-term production plan serves as the foundational blueprint, the mid-term production plan is the detailed production roadmap, and the short-term production plan focuses on execution. The mid-term production plan is crucial as it encompasses all shipbuilding processes and acts as the monthly operational plan for production departments. An accurate mid-term production plan is essential for maintaining competitiveness in shipbuilding production management. In this study, we developed a mid-term production plan simulation model that considers constraints such as operational rules, workload capacity, and various task precedence relationships. By using this simulation, companies can save costs and time during schedule validation and improve productivity by developing a more effective mid-term production plan. pdf Commercial Case StudiesSimulating Production Systems - Part 2 Chair: Chris Tonn (Spirit AeroSystems)
Simulation-Based Analysis of Production Flexibility in Foundries Under Volatile Electricity Prices Johannes Dettelbacher and Alexander Buchele (Ansbach University of Applied Sciences) Abstract AbstractThe energy transition is a challenge for energy-intensive industries such as non-ferrous foundries. It is important to support the transition to renewable energy sources through the electrification of melting plants. In particular, the fluctuating electricity supply and fluctuating electricity prices offer the opportunity to make melting plants more flexible and to scale back energy-intensive processes, especially when energy prices are high. This pilot study investigates how an electrified melting plant can adapt to the electricity market. For this purpose, a simulation model has been developed based on a selected example company. The energy consumption over time and the logistical effects of an operation are considered. The simulation model is implemented as a hybrid simulation combining a discrete event simulation at the plant level and a continuous process simulation within the furnaces. Simulation-based optimization can be used to determine an operational management strategy that is optimized in line with electricity prices. pdfMeta-models for Buffer Sizing in High-Speed Packaging Systems Shannon C. Browning, Elizabeth Ball, Zary Peretz, and Stephen Wilkes (The Haskell Company) Abstract AbstractBuffer sizing is a critical decision when designing high-speed packaging lines for fast-moving consumer goods. Strategically placed buffers provide benefits, but there is no simple equation for the resulting increase in system efficiency. Discrete-event simulation tools are commonly used to offer estimates for single scenarios. We introduce an approach using meta-models to inform systems engineers where to place buffers and how to size them. The model captures the decreasing marginal benefits of buffer capacity, including interactions between buffer sizes in different locations.
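The diminishing-marginal-benefit behavior that the buffer-sizing abstract describes can be illustrated with a minimal sketch. All names below and the saturating-exponential curve form are hypothetical illustrations, not the authors' actual meta-model: noisy "simulation run" observations of line efficiency versus buffer size are fitted by a simple least-squares grid search, and the fitted curve shows each extra unit of buffer helping less than the last.

```python
import math
import random

def line_efficiency(buffer_size, e_max=0.95, tau=40.0):
    """Hypothetical ground-truth response: efficiency saturates as buffer grows."""
    return e_max * (1.0 - math.exp(-buffer_size / tau))

def fit_metamodel(observations):
    """Fit E(b) = e_max * (1 - exp(-b / tau)) by grid-search least squares."""
    best = None
    for e_max in [x / 100 for x in range(50, 100)]:
        for tau in range(5, 101, 5):
            sse = sum((e - e_max * (1 - math.exp(-b / tau))) ** 2
                      for b, e in observations)
            if best is None or sse < best[0]:
                best = (sse, e_max, tau)
    return best[1], best[2]

random.seed(7)
# Pretend each point is one discrete-event simulation run at a given buffer size.
obs = [(b, line_efficiency(b) + random.gauss(0, 0.005)) for b in range(5, 200, 10)]
e_max, tau = fit_metamodel(obs)

# Diminishing marginal benefit: the same +10 units of buffer gains less
# efficiency when the buffer is already large.
gain_small = e_max * (1 - math.exp(-20 / tau)) - e_max * (1 - math.exp(-10 / tau))
gain_large = e_max * (1 - math.exp(-120 / tau)) - e_max * (1 - math.exp(-110 / tau))
```

Once fitted, such a meta-model can answer sizing questions in milliseconds instead of re-running the discrete-event simulation for every candidate buffer configuration.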
pdfConstraint Identification in Semi-automated Production System Chris Tonn and John Grant (Spirit AeroSystems) Abstract AbstractThe production rate for an established semi-automated production line is expected to increase by 58%. The production line has throughput issues, and it is difficult to identify which improvements will resolve bottlenecks to increase system-wide throughput. A Discrete Event Simulation model of the production line was developed to identify bottlenecks, test solutions, and quantify processing duration target levels to meet system throughput goals. Primary bottlenecks were found in Factory 2, where multiple changes were identified to reduce pulse time by 32%. The model was also used to answer other production questions, including the optimum manpower strategy, batch size, and changeover strategy for profiling machines, and to investigate the impact of adding another station in paint. pdf Commercial Case StudiesSimulating Material Handling Systems - Part 2 Chair: Bahar Biller (SAS Institute, Inc)
Dynamic Multiple-load Mobile Transport Operation in Semiconductor Fab Sungwook Jang (Korea Advanced Institute of Science and Technology) and Young Jae Jang (Korea Advanced Institute of Science and Technology, DAIM Research) Abstract AbstractIn semiconductor fabs, the efficiency of automated material handling systems is critical for maintaining productivity. While single-load mobile transport systems are the conventional choice, multiple-load mobile transport systems have gained attention for their potential to enhance transportation efficiency. However, existing research primarily focuses on the analytical aspects of multiple-load mobile transport systems, leaving a gap in the exploration of advanced operational policies. This study addresses this gap by utilizing an adaptive large neighborhood search algorithm to optimize job assignment and scheduling in dynamic environments. Our approach is validated through simulation experiments using a hypothetical semiconductor fab layout, comparing the proposed policy against various policies. The results demonstrate that our policy significantly reduces average delivery time, showcasing its superiority in dynamic operation. The findings provide valuable insights into how the proposed algorithm can be applied to real-world scenarios, laying the groundwork for future application and testing in an actual semiconductor fab. pdfKeeping Them Flying: Predicting Spare Parts Inventory for Jet Fighters Using SAS Bahar Biller and Jinxin Yi (SAS Institute, Inc) Abstract AbstractAn effective preventive maintenance practice is expected to maximize asset uptime while minimizing the need to hold large amounts of spare parts inventory. The development of such a practice requires advanced analytics capability to accurately predict asset lifetime distributions and use these predictions for optimizing management of spare parts inventory.
Challenges of developing a spare parts inventory management solution include the need to work with right-censored event data sets for asset lifetime prediction, the assessment of statistical models created to represent asset lifetime distributions, the need to optimize inventory levels under uncertainty, and the validation of the resulting solution with the available historical data sets. For a real project at a large aircraft manufacturer, we develop a solution that overcomes these challenges of predicting spare parts inventory levels with an integrated use of statistical modeling, simulation, and optimization. Our goal is to share this case study at SAS with the WSC community. pdfDigital Twin Based Dynamic Routing for Overhead Hoist Transport Systems in Semiconductor FAB Ferdinandz Japhne and Young Jae Jang (Korea Advanced Institute of Science and Technology) Abstract AbstractAutomated Material Handling Systems (AMHS) are a crucial part of modern semiconductor fab facilities. Dozens to hundreds of Overhead Hoist Transport (OHT) vehicles are used to conduct the complex transport processes of a semiconductor fab. However, ongoing technical developments and fluctuating operational requirements demand that AMHS systems be configurable and dynamic. These factors directly affect transport processes in a semiconductor fab and must be addressed accordingly to maintain the efficiency of the AMHS system. In this research, we use digital twin technology to estimate the effect of AMHS system changes in advance, before they occur in the real system. Moreover, we verify the effectiveness and the accuracy of a digital twin in a system that implements the Q(λ) learning method as benchmark dynamic routing. pdf Commercial Case StudiesSimulating Logistics Systems Chair: Yusuke Legard (MOSIMTEC)
Simulating The Economics Of Autonomous Haulage For Mining Trucks Andrey Malykhanov (Amalgama Software Design, MineTwin) and Jaco-Ben Vosloo (JBV Consulting, MineTwin) Abstract AbstractThis study evaluates the economics of autonomous haulage systems (AHS), comparing 100-ton rigid haul trucks to 40-ton vocational trucks within a realistic surface mining scenario. The primary objective was to determine the economic viability of using smaller, more cost-effective autonomous trucks versus larger, traditional manned trucks. Smaller trucks are much cheaper to autonomize but are likely to cause congestion, thus lowering outputs and negating unmanned operations' benefits. Simulation was used to model the effects of truck fleet configurations on congestion, queuing, and overall mining rates. The simulation results were used in optimal life-of-mine (LOM) scheduling and net present value (NPV) calculations. The study demonstrated that using autonomous 40-ton trucks improved the mine's NPV by 31% compared to human-driven 100-ton trucks and 7% over autonomous 100-ton trucks. These findings suggest significant potential for cost-optimizing surface mining operations using AHS based on smaller vocational trucks. pdfSimulation-Based Capital Investment Decision-Making In Steelworks Akira Kumano and Shun Yamamoto (JFE Steel Corporation) Abstract AbstractWe have created a simulator to analyze material logistics within and outside the steelworks, aiding decision-making in capital investment. To ensure profitability, we are consolidating and migrating facilities and production processes. However, introducing new equipment or making significant process changes across multiple sites requires substantial investment and careful consideration. By using the simulator, we assessed logistics process changes before investing in facilities. 
Additionally, by validating the impact of breakdowns and failures, we confirmed the potential to avoid excessive facility investments costing millions of dollars. pdf Commercial Case StudiesBroader Application of Simulation Models Chair: Nelson Alfaro Rivas (MOSIMTEC, LLC)
Microgrid Design Planning for the U.S. Department of Defense Daniel Reich, Giovanna Oriti, Ronald Giachetti, Douglas L. Van Bossuyt, and Susan M. Sanchez (Naval Postgraduate School) Abstract AbstractThe U.S. Department of Defense operates both permanent and temporary installations throughout the world that have critical needs for secure energy that is reliable, efficient, and resilient. Microgrids are a key infrastructure technology for delivering both primary and backup energy systems. We have developed simulation-based methods for modeling energy requirements, measuring the performance of microgrids composed of distributed energy resources, identifying component capacities required to meet energy demands, and analyzing both robustness and resilience of potential microgrid designs. To deliver these capabilities to installation energy managers for use in infrastructure planning, we have released the Microgrid Planner web application. This design tool has been successfully used for the conceptual design of several pilot microgrids; one is currently under construction. pdfGroundwork for Simulation-Based Process Improvement Sanjay Jain (The George Washington University) Abstract AbstractAn organization interested in applying simulation for process improvement has to go through a learning process. The organization has to understand the effort involved in applying simulation and the value it will provide before investing in a simulation-based process improvement project. The learning may need to go through a few iterations before a decision to formally proceed with such a project can be taken. This case study describes the groundwork required before embarking on a simulation-based process improvement project. A recent process assessment project that did not utilize simulation is revisited to extract data for simulation, and a basic model is developed for exploring the potential.
The exercise was also intended to help define additional data that will need to be captured in the next process assessment project for the next iteration. The challenges in such post hoc extraction are discussed together with the limitations of the resulting first model. pdfA Data-driven Framework for Low Code Simulation Development in the Aluminum Industry Ju Jeon, Abdurrahman Yavuz, Adarsh Gadepalli, and Aristotelis Thanos (Novelis) Abstract AbstractEagle Simulate is a novel Simulation Data-Driven Framework for efficient capacity analysis and decision support in industrial aluminum plant operations. The framework is used internally in Novelis and offers a user-friendly interface for intuitive scenario creation and management, and a scenario-driven database for data organization. At the core of the framework lies a model built with AnyLogic to accurately simulate the complexities of aluminum plant operations. A key feature is the use of data schemas for auto-building the simulation model, positioning it as a low-code/no-code solution. This allows users to assemble the model predominantly from the data layer, accelerating development and reducing dependency on traditional coding. We will explore the framework's components, scenario management, and provide insights into the simulation model. This framework represents a significant advancement in swiftly deploying and iterating simulation models, fostering informed decision-making and enhancing efficiency in aluminum production facilities. pdf Commercial Case StudiesSimulation and Machine Learning Chair: Tae Ha (MOSIMTEC LLC)
Scenario-based Risk Management with Temporal Fusion Transformer Irene Aldridge and Daham Kim (Cornell University, AbleMarket Inc.) Abstract AbstractModern risk management practice often calls for Monte-Carlo simulation to visualize future realizations of portfolio assets. However, many assets may have interdependent paths, introducing considerable complexity into the simulation. For example, stock returns of firms like Apple and Microsoft may be co-evolving given common industry factors. At present, the interdependence is often modeled in simulation via copulas, which may be suboptimal both in computational speed and in their stationarity assumptions. Instead of Monte-Carlo with copulas, in this paper, we propose modeling with an attention-based model known as the Temporal Fusion Transformer (TFT). We show that the TFT model can provide depth and breadth equivalent or even superior to that of the Monte-Carlo method by simulating the assets’ complex dynamics in the presence of interdependent factors and qualitative variables. pdfDeploying the MetaPOL Digital Twin for Pattern of Life Analysis of Secure Facilities under Movement Sensor Deployment Restrictions Chathika Gunaratne, Mason Stott, Debraj De, Gautam Thakur, and Chris Young (Oak Ridge National Laboratory) Abstract AbstractVirtual reality-based digital twins are a safe and cost-effective means to monitor and ensure safety within secure facilities. These digital twins can be used as immersive virtual laboratories to evaluate likely behaviors of facility guests and personnel and identify anomalous events that may compromise facility safety and security. Realistic non-playable character behaviors contribute greatly to improving overall digital twin realism. However, due to data collection restrictions, human movement sensor deployment within secure facilities may be delayed or unavailable.
In this case study, we address this challenge, encountered when deploying the MetaPOL digital twin framework on the High Flux Isotope Reactor facility at Oak Ridge National Laboratory, by combining synthetic human movement data, generated via an agent-based model driven by anecdotal evidence of human behavior, with deep neural network surrogates trained to predict next destination and stay duration for non-playable characters. pdf
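The kind of synthetic human-movement generation described in the MetaPOL abstract, standing in for unavailable sensor data, might be sketched as a toy agent-based model. The facility graph, room names, and stay durations below are invented for illustration only, not the actual HFIR layout or the authors' framework:

```python
import random

# Hypothetical facility graph: rooms and allowed transitions (illustrative only).
ADJACENCY = {
    "lobby": ["control_room", "corridor"],
    "control_room": ["lobby"],
    "corridor": ["lobby", "lab", "break_room"],
    "lab": ["corridor"],
    "break_room": ["corridor"],
}
# Anecdote-style priors: typical stay duration (minutes) per room.
STAY_MINUTES = {"lobby": 2, "control_room": 45, "corridor": 1, "lab": 90, "break_room": 15}

def generate_trajectory(start="lobby", steps=20, rng=None):
    """Simulate one agent: record (room, stay) pairs, drawing the stay time
    from an exponential around the room's typical duration, then move to a
    random adjacent room."""
    rng = rng or random.Random()
    room, trajectory = start, []
    for _ in range(steps):
        stay = rng.expovariate(1.0 / STAY_MINUTES[room])
        trajectory.append((room, round(stay, 1)))
        room = rng.choice(ADJACENCY[room])
    return trajectory

traj = generate_trajectory(rng=random.Random(42))
```

Trajectories like `traj` could then serve as training data for a next-destination / stay-duration surrogate, in the spirit of the paper's deep-neural-network approach.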
Complex and Resilient Systems Track Coordinator - Complex and Resilient Systems: Saurabh Mittal (MITRE Corporation), Claudia Szabo (University of Adelaide) Complex and Resilient SystemsSystem Resilience and Cyber Risk Chair: Claudia Szabo (University of Adelaide)
Cooperative Collision Avoidance for Autonomous Vessels in Mixed Traffic Environment Shivali Verma and Avinash Samvedi (Shiv Nadar Institution of Eminence) Tags: agent-based, complex systems, resiliency, Python Abstract AbstractThe rapid advancement of autonomous vessels and communication technology promises safer, more efficient, and sustainable shipping solutions. While autonomous vessels (AVs) offer the potential to significantly reduce the number of accidents caused by human error, their successful integration into mixed environments hinges on their ability to navigate complex interactions with manual vessels (MVs) effectively. We study the dynamics of interaction between AVs and MVs under the lens of cooperative and non-cooperative behavior using a cooperative game theory model in a connected mixed environment. Different risk perceptions of AVs and MVs based on ship domains were considered for the estimation of collision risk for different vessel types in mixed traffic. Simulation results validate the proposed collision avoidance strategy in multiple scenarios, demonstrating that the cooperative game approach can help AVs dynamically adapt their trajectories and effectively obtain collision-free paths amid complex interactions with various encountered vessels. pdfHow Hard is it to Estimate Systemic Enterprise Cyber-Risk? Ranjan Pal (MIT Sloan School of Management), Rohan Sequeira (University of Southern California), and Sander Zeijlemaker (MIT Sloan School of Management) Abstract AbstractSystemic enterprise cyber-risk typically arises when a single (software) vulnerability common across many enterprise computing devices across the globe is exploited by adversaries and results in catastrophic aggregate cyber-loss consequences to be borne by CRM entities. Examples of such vulnerability exploitation incidents include the Log4j (2021) and SolarWinds (2020) cyber-attacks.
The important question we ask here is: how hard is it to discover these ‘single vulnerabilities’ in enterprise information systems? We prove that answering this question is NP-Hard. In other words, let alone humans, even a computer running cyber-attack simulations might not (in the worst case) discover such vulnerabilities in finite time. Consequently, CRM entities can only expect and prepare in time for an inevitable catastrophic systemic cyber-incident rather than predict the likelihood of one. Accordingly, we propose the policy implications of our research for the CRM market stakeholders and elucidate relevant action items for effective systemic enterprise CRM. pdfIs Systemic Cyber Risk Management for Enterprises Sustainable? Best Contributed Applied Paper - Finalist, Best Contributed Theoretical Paper - Finalist Konnie Kangning Duan (Massachusetts Institute of Technology), Rohan Xavier Sequeira (University of Southern California), and Ranjan Pal and Michael Siegel (MIT Sloan School of Management) Tags: estimation, Monte Carlo, conceptual modeling, cybersecurity, supply chain Abstract AbstractBusiness enterprises have grappled over the last decade and a half with unavoidable risks of (major) cyber incidents. The market to manage such risks using cyber insurance (CI) has been growing steadily but is still skeptical of the economic and societal impact of systemic risk across networked supply chains in interdependent IT-driven enterprises. While systemic risk from traditional cyber loss events might lure more capacity to the CI market, such a risk from a catastrophic (CAT) cyber loss event can quite likely reverse this trend. The sustainability of risk diversification by cyber insurers in these environments depends on (a) the statistical nature of cyber risks that contribute to systemic cyber risk and (b) the interconnection topology between enterprises.
We focus here on (a) and solve the theory challenge problem of proposing simulation-validated mathematical conditions on cyber risk distributions that make systemic risk VaR diversification-friendly for CI markets. pdf Complex and Resilient SystemsPrediction and Adaptation in Complex Systems Chair: Claudia Szabo (University of Adelaide)
A Comparative Study Of Price-Driven Production Control Methods Using A Sawmill Simulator Louis Duhem and Maha Benali (Polytechnique Montréal) and Michael Morin and Jonathan Gaudreault (Université Laval) Tags: data driven, sampling, manufacturing, student Abstract AbstractIn today's dynamic markets, decision-making relies heavily on simulation models to evaluate different production control methods. Although price-driven production control methods have proven their effectiveness in exploiting price volatility, certain industries are still reluctant to adopt these methods in their operational decision-making. This research demonstrates the relevance of price-driven methods for the wood products industry. A sawmill simulator is used to illustrate this. Since the simulation of the sawmill production process is time-consuming, we propose a probabilistic sampling-based method to rationalize the dataset size. A comparative study shows that exploiting historical and recent price data increases sawmill revenues. pdfA Data-Driven Intelligent Supply Chain Disruption Response Recommender System Framework Yang Hu and Pezhman Ghadimi (University College Dublin) Tags: validation, complex systems, resiliency, supply chain, student Abstract AbstractIn light of the Industry 4.0 era, the global pandemic, and wars, interest in deploying digital technologies to increase supply chain resilience (SCRes) is rising. The utilization of recommender systems as a supply chain (SC) resilience measure is neglected, although these systems can enhance SC resilience. To address this problem, this research proposed a data-driven supply chain disruption response framework based on intelligent recommender system techniques and implemented the framework with a practical use case. The framework was validated by a System Dynamics (SD) model to demonstrate the effectiveness of the proposed system as a new communication scheme after a SC disruption, considering a demonstrative use case. 
Results show that the proposed framework can be implemented as an effective SC disruption mitigation measure in the SCRes response phase and help SC participants better react after the SC disruption. pdfRiver Digital Twin for Water Quality Prediction Surabhi Shrivastava, Souvik Barat, Shankar Kausley, Vinay Kulkarni, and Beena Rai (Tata Consultancy Services Ltd) Abstract AbstractA river is a complex system of systems whose dynamics are influenced by multiple factors such as river characteristics (e.g., gradient and terrain), environmental factors (e.g., rainfall and temperature), and human interventions like building dams and discharging wastewater. To understand and improve river water quality, a river digital twin is created using a combination of agent-based and physics-based models. Agent-based modeling captures behavioral relationships for ease in modeling complex systems, while physics-based models simulate transport and reaction behavior. This integrated approach constructs a digital twin capable of simulating quality parameters under various scenarios, like rainfall, effluent discharge, changing demographics, and climate. The study enhances understanding of river ecosystems and provides a tool for managing their ecological health. The river digital twin is developed considering the river and its ecosystem with different inflows and outflows and is validated using a 480 km stretch of India’s largest river, the Ganga. pdf
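The physics-based side of such a river digital twin can be hinted at with a minimal first-order decay sketch, a simplification in the spirit of classical pollutant-transport models. The decay rate, velocity, and reach values below are illustrative assumptions, not the paper's calibrated Ganga model:

```python
import math

def downstream_concentration(c0, decay_rate, velocity, distance_km):
    """First-order decay of a pollutant as it advects downstream:
    C(x) = C0 * exp(-k * t), with travel time t = x / u,
    k in 1/day and u in km/day."""
    travel_time_days = distance_km / velocity
    return c0 * math.exp(-decay_rate * travel_time_days)

def reach_profile(c0, reaches, decay_rate=0.3, velocity=48.0):
    """March down a sequence of reaches; each tuple is (length_km, added_load)
    where added_load (mg/L) models e.g. a wastewater outfall at the reach head."""
    conc, profile = c0, []
    for length_km, added_load in reaches:
        conc = downstream_concentration(conc + added_load, decay_rate, velocity, length_km)
        profile.append(round(conc, 3))
    return profile

# Three 40 km reaches; an effluent discharge enters at the head of the second.
profile = reach_profile(c0=2.0, reaches=[(40, 0.0), (40, 1.5), (40, 0.0)])
```

In the paper's hybrid design, agent-based components would supply the inflows and interventions (dams, discharges, rainfall) that a transport kernel like this one propagates downstream.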
Data Science for Simulation Data Science for SimulationData-Driven Modeling and Simulation Frameworks Chair: Abdolreza Abhari
A Comprehensive Framework for Data-Driven Agent-Based Modeling Ruhollah Jamali (University of Southern Denmark) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology) Abstract AbstractIntegrating data-driven methodologies with agent-based simulation presents an opportunity to automate modeling and enable Digital Twins for complex systems. This integration allows for the utilization of real-world data to extract models that update with changes in the corresponding real systems and enhance our abilities to make informed decisions. We were unable to identify a systematic approach for developing data-driven agent-based models beyond isolated attempts focused on specific aspects. In response, we reviewed existing literature to develop a framework that systematically approaches data-driven agent-based modeling. We believe that our framework can assist in systematically evaluating which parts of agent-based models' development processes can be data-driven. Furthermore, we provide a comprehensive exploration of data-driven methods that can be applied to each stage of the model development process. Finally, we utilize our prior works in this area to demonstrate the application of data-driven methodologies in capturing patterns and insights for model development. pdfData-driven Uncertainty Revenue Modeling for Computation Resource Allocation in Recommendation Systems Feixue Liu and Yuqing Miao (Tsinghua University); Yun Ye (Ant Group); and Peisong Wang, Xinglu Liu, Kai Zhang, and Wai kin (Victor) Chan (Tsinghua University) Tags: data driven, optimization, Python, student, industry Abstract AbstractIn recent years, as the resource consumption of computation-intensive recommendation systems (RS) has increased significantly and the supply of large-scale resources has encountered bottlenecks, computation resource allocation for improving computing efficiency has attracted industry attention. The simulation of revenue is a focal point in the allocation problem.
However, due to the complex engineering architecture of RS, no existing research proposes a simulation model that addresses the relationship between resource allocation strategies and benefits. This paper, based on real data from Alipay's advertising RS and integrating queuing theory, models the relationship from resource allocation decisions to revenue, considering the exposure randomness of traffic. We further merge allocation tasks with capacity planning (CP) to establish a two-stage joint optimization model and use the revenue model above as the objective. The proposed model outperforms the baseline by 1.9% in revenue and flexibly adapts the exposure rate, providing insights for the simulation of industrial RS. pdf Data Science for SimulationMachine Learning and Data Utilization in Simulation Chair: Abdolreza Abhari
Towards New Simulation Models For DL-based Video Streaming In Edge Networks Abdolreza Abhari (Toronto Metropolitan University) Tags: data analytics, distribution-fitting, machine learning, Python, student Abstract AbstractTo evaluate novel solutions for edge computing systems, suitable distribution models for simulation are essential. The extensive use of deep learning (DL) in video analytics has altered traffic patterns on edge and cloud servers, necessitating innovative models. Queuing models are used to simulate the performance and stability of edge-enabled systems, particularly video streaming applications. This paper demonstrates that traditional Markovian M/M/s and general distribution G/G/s queuing models must be revamped for accurate simulation. We examined these queuing models by characterizing the real data with discrete and continuous distributions for arrival rates to homogeneous servers in AI-based video analytics edge systems. Based on the achieved results, traditional methods for finding general distributions are inadequate, and an automated method for finding an empirical distribution is needed. Therefore, we introduce a novel approach using a Wasserstein generative adversarial network (WGAN) to generate artificial data to automate the process of estimating the empirical distribution for modeling these applications. pdfEnhancing GPT-3.5’s Proficiency in NetLogo through Few-Shot Prompting and Retrieval-Augmented Generation Best Contributed Applied Paper - Finalist Joseph Martínez, Brian Llinas, Jhon Gregory Botello, Jose J.
Padilla, and Erika Frydenlund (Virginia Modeling, Analysis, and Simulation Center) Tags: agent-based, machine learning, Netlogo, Python, student Abstract AbstractRecognizing the limited research on the capabilities of Large Language Models (LLMs) with low-resource languages, this study evaluates and increases the proficiency of the LLM GPT-3.5 in generating interface and procedural code elements for NetLogo, a multi-agent programming language and modeling environment. To achieve this, we employed “few-shot” prompting and Retrieval-Augmented Generation (RAG) methodologies using two manually created datasets, NetLogoEvalCode and NetLogoEvalInterface. The results demonstrate that GPT-3.5 can generate NetLogo elements and code procedures more effectively when provided with additional examples to learn from, highlighting the potential of LLMs in aiding the development of agent-based models (ABMs). On the other hand, the RAG model performed poorly. We list possible reasons for this result, which align with common RAG challenges identified in the state of the art. We propose future research directions for leveraging LLMs for simulation development and instructional purposes in the context of ABMs. pdfA Review of Trends and Practices in Using Visual Data for Construction-Related Machine Learning Models Abbas Mohammadi, SeyedeZahra Golazad, and Abbas Rashidi (University of Utah) Tags: data analytics, data driven, machine learning, construction Abstract AbstractThis paper systematically reviews image-based analysis in the construction industry, examining 136 articles through 2023. The findings reveal a marked increase in the use of machine learning (ML), deep learning (DL), and reinforcement learning (RL) models, which utilize image and video data to enhance worker safety, monitor construction progress, and improve project management. The study identifies a significant shift towards integrating real and synthetic data, enhancing model robustness.
It also highlights the rising adoption of data-sharing practices, with an increase in publicly available datasets. However, the review highlights underexplored areas such as synthetic data use and advanced privacy-preserving methods. These gaps suggest opportunities for further research to leverage technology more effectively in the construction sector. pdf Data Science for SimulationAdvanced Forecasting and Simulation Techniques Chair: Abdolreza Abhari
Perceiving Copulas for Multimodal Time Series Forecasting Cat Phuoc Le, Chris Cannella, Ali Hasan, Yuting Ng, and Vahid Tarokh (Duke University) Tags: big data, machine learning, Python, student Abstract AbstractTransformers have demonstrated remarkable efficacy in forecasting time series. However, their dependence on self-attention mechanisms demands significant computational resources, thereby limiting their applicability across diverse tasks. Here, we propose the perceiver-CDF for modeling cumulative distribution functions (CDF) of time series. Our model combines the perceiver architecture with copula-based attention for multimodal time series prediction. By leveraging the perceiver, our model transforms multimodal data into a compact latent space, thereby significantly reducing computational demands. Subsequently, we implement copula-based attention to construct the joint distribution of missing data for future prediction. To mitigate error propagation and enhance efficiency, we introduce output variance testing and midpoint inference for the local attention mechanism. This enables the model to efficiently capture dependencies within nearby imputed samples without considering all previous samples. Experiments on various benchmarks demonstrate a consistent improvement over other methods while utilizing only half of the resources. pdfScorigami: Simulating the Distribution and Assessing the Rarity of National Football League Scores Liam Moyer (Bucknell University), Jameson Railey (Stevens Institute of Technology), Andrew Daw (University of Southern California), and Samuel C. Gutekunst (Bucknell University) Tags: data analytics, data driven, conceptual modeling Abstract AbstractNFL Scorigamis have a cult-like following and occur whenever a game ends in a new, never-before-seen score. While substantial research has gone into simulating and predicting NFL game scores, most work has been in relation to winner prediction and betting spreads.
We analyze a Poisson random variable model for the distribution of NFL game scores and show that it fails to incorporate important game dynamics. Through an analysis of extensive play-by-play data, we extend this to a non-stationary, state-dependent Poisson process model. This latter model more closely fits real NFL score data, and we use it in NFL score simulations to forecast likely future Scorigamis. pdf

Advanced Solar Power Forecasting: A Hybrid/Ensemble Approach Utilizing Geographic and Meteorological Data Mahdi Darvishi and Abdolreza Abhari (Toronto Metropolitan University) Tags: big data, environment, Python, student Abstract: Photovoltaic (PV) systems are pivotal in the global energy transition, where accurate solar power forecasting is critical. Traditional forecasting has leaned heavily on solar irradiance data, yet such reliance carries inherent uncertainties and measurement complexities, presenting significant forecasting challenges. This paper introduces a novel hybrid/ensemble model that reduces dependence on solar irradiance data, utilizing geographic, meteorological, and temporal data to predict solar power output. Combining the strengths of the XGBoost and LightGBM algorithms through a linear regression meta-model, our approach demonstrates improved prediction accuracy, evidenced by a mean absolute error (MAE) of 0.033 and an R-squared value of 0.693. This study advances solar power forecasting, enhancing PV system efficiency and reliability, and promoting sustainable energy investments. pdf

Data Science for Simulation, Simulation as Digital Twin, Simulation Optimization - Cross-Track Session 2: Methods. Chair: Jinbo Zhao (Texas A&M University)
Simulation Optimization with Non-Stationary Streaming Input Data Songhao Wang (Southern University of Science and Technology), Haowei Wang (Rice-Rick Digitalization PTE. Ltd.), Jianglin Xia (Southern University of Science and Technology), and Xiuqin Xu (McKinsey & Company) Tags: data driven, input modeling, optimization, student Abstract: Simulation optimization has become an emerging tool for the design and analysis of real-world systems. In stochastic simulation, the input distribution is a main driving force accounting for system randomness. Most existing works on input modeling focus on stationary input distributions. In reality, however, input distributions could experience sudden disruptive changes due to external factors. In this work, we consider input modeling through non-stationary streaming input data, where the input data arrive sequentially across different decision stages. Both the parameters of the input distributions and the disruptive change points are unknown. We use a Markov Switching Model to estimate the non-stationary input distributions, and design a metamodel-based approach to solve the resulting optimization problem. The proposed metamodel and optimization algorithm can utilize the simulation results from all the past stages. A numerical study on an inventory system shows that our algorithm can solve the problem more efficiently compared to common approaches. pdf

Catalyzing Intelligent Logistics System Simulation with Data-driven Decision Strategies Shiqi Hao, Yang Liu, Yu Wang, Xiaopeng Huang, Muchuan Zhao, and Xiaotian Zhuang (JD Logistics, Inc.) Tags: machine learning, logistics, Java, Python, industry Abstract: Machine learning is becoming an important technique in modern simulation systems due to its strong capability in capturing the random, complex, and dynamic features of the physical world.
Based on these advantages, it has been employed as a powerful tool that enables the intelligent simulation of large-scale logistics systems in a highly efficient manner. Inspired by these applications, this work presents a new paradigm in which machine learning is utilized to generate data-driven decision strategies that accurately emulate the practical operations in logistics systems and improve simulation accuracy. Compared with existing approaches, the proposed method is also characterized by high flexibility and transparency. Consequently, it can adapt to a large variety of logistics system architectures and capture adequate details of system dynamics. Experiments have been conducted based on the simulation of large-scale real-world logistics systems, where the proposed method demonstrates superior accuracy in both strategy learning and simulation. pdf

A Framework for Digital Twin Collaboration Zhengchang Hua (Southern University of Science and Technology, University of Leeds); Karim Djemame (University of Leeds); Nikos Tziritas (University of Thessaly); and Georgios Theodoropoulos (Southern University of Science and Technology) Tags: agent-based, distributed, complex systems, digital twin, student Abstract: Digital Twins (DTs) have emerged as a powerful tool for modeling Large Complex Systems (LCSs). Their strength lies in the detailed virtual models that enable accurate predictions, but the immense scale and decentralized ownership of LCSs present challenges for traditionally centralized approaches. This paper proposes a framework that leverages the prevalence of individual DTs within LCSs. By facilitating the exchange of decisions and predictions, this framework fosters collaboration among autonomous DTs, enhancing performance. Additionally, a trust-based mechanism is introduced to improve system robustness against poor decision-making within the collaborative network.
The framework's effectiveness is demonstrated in a virtual power plant (VPP) scenario. The evaluation results confirm that the system meets its objectives across various test cases and show scalability for large deployments. pdf
Environment Sustainability and Resilience

Environment Sustainability and Resilience, Simulation in Education - Digital Twins in Education and Energy. Chair: Daniel Jun Chung Hii (Kajima Corporation, Kajima Technical Research Institute Singapore)
Picking System Digital Twin: A Lab-based Case Study Vicky Sipasseuth and Michael E. Kuhl (Rochester Institute of Technology) Abstract: Digital Twins have become a focal point of simulation modeling and analysis in recent years and seem to be gaining momentum. As such, there is a need to more fully integrate practical digital twin modeling and analysis into systems simulation courses. In this paper, we present a lab-based digital twin case study of a pick-to-light picking system. The digital twin design framework and methodology utilize a simulation model that acts as a virtual near real-time representation of a physical picking system. The digital twin can be used to analyze picking system configurations such as alternative picking policies, inventory policies, worker allocation to picking zones, and related decisions. pdf

Novel Methods for Teaching Simulation: Strengthening Digital Twin Development Amel Jaoua (National Engineering School of Tunis), Elisa Negri (Politecnico di Milano), and Mehdi Jaoua and Nabil Benkirane (Independent Researcher) Abstract: This article proposes new methods for teaching Discrete Event Simulation (DES) in manufacturing systems.
Over the last four decades, numerous books have offered methods for teaching DES as a what-if analysis tool for addressing stochastic problems. However, the emergence of the Digital Twin (DT) concept has posed challenges for such traditionally designed DES models. These models often struggle to evolve effectively into Real-Time Simulators (RTS). RTS are connected DES models embedded as kernels in the DT framework and synchronized based on real-time sensor data streams. Thus, the objective of this work is to introduce teaching methods that provide deeper insights into designing the needed high-fidelity DES models capable of evolving into RTS. It also illustrates how the Immersive Learning approach is employed to immerse students in a manufacturing environment through Virtual Reality (VR) experiences, allowing them to grasp key concepts such as granularity levels and synchronization challenges in deploying a DT. pdf

Towards the Digital Twinning and Simulation of a Smart Building for Well-Being Daniel Jun Chung Hii and Takamasa Hasama (Kajima Technical Research Institute, Singapore, Kajima Corporation) Tags: agent-based, digital twin, project management, AnyLogic, industry Abstract: Smart cities and buildings are enabled by the integration and monitoring of Internet of Things (IoT) sensor infrastructures in the current Industry 5.0 era. Environment sensing and the counting of people and robots enable both digital twin and agent-based modelling (ABM) simulations. These enable an understanding of the interactions between the built environment and humans. Movement analysis supports the planning and design of spaces, as well as the use of machine learning methods trained on trajectories for space usage prediction. Understanding human behavior and social interaction is important for generating people-friendly spaces. GEAR is a smart, green, and WELL-certified building embedded with sensors that serves as a living lab for research and development.
The diverse workspace layouts create an ideal testbed to study interactions between humans and the built environment. The aim is to achieve more sustainable and better-designed spaces for human well-being in a fast-changing world. pdf

Environment Sustainability and Resilience - Digital Twins and Smart Energy. Chair: Daniel René Bayer (University of Würzburg)
Analyzing the Impact of Electric Vehicles on Local Energy Systems using Digital Twins Daniel René Bayer and Marco Pruckner (University of Würzburg) Abstract: The electrification of the transportation and heating sectors, the so-called sector coupling, is one of the core elements for achieving independence from fossil fuels. As it highly affects electricity demand, especially on the local level, the integrated modeling and simulation of all sectors is a promising approach for analyzing design decisions or complex control strategies. This paper analyzes the increase in electricity demand resulting from sector coupling, mainly due to integrating electric vehicles into urban energy systems. To this end, we utilize a digital twin of an existing local energy system and extend it with a mobility simulation model to evaluate the impact of electric vehicles on the distribution grid level. Our findings indicate a significant rise in annual electricity consumption attributed to electric vehicles, with home charging alone resulting in a 78% increase. However, we demonstrate that integrating photovoltaic and battery energy storage systems can effectively mitigate this rise. pdf

Evaluating the Impact of Urban Form Evolution on Urban Energy Performance and Renewable Energy Potential using Agent-Based Modeling Osama Mussawar (Khalifa University), Elin Markarian (Carleton University), Ahmad Mayyas (Khalifa University), and Elie Azar (Carleton University) Abstract: This paper presents an agent-based modeling framework to quantify the impact of evolving urban forms on urban energy performance and renewable energy potential. The framework leverages energy modeling tools, large public datasets, and urban form classifications to assess urban performance along various techno-economic metrics (e.g., self-sufficiency and energy costs).
A case study of the historical evolution of the urban form of the city of Toronto, Canada, is presented, focusing on the transition from large low-rise to open high-rise urban forms. Results show that increasing the proportion of open high-rise areas from 0 to 25% increased the net energy ratio from 22% to 72%, implying higher reliance on the grid to match demand. While a denser urban form challenges energy self-sufficiency and net-zero emissions goals, aggregating buildings’ energy demand and supply at the community level improved self-sufficiency levels, offering a promising avenue for future urban energy planning and policy efforts. pdf

Dynamic Load Usage Behavior Simulation in Smart Grids: A Data-Driven Approach in Urban Buildings Shuang Dai and Fanlin Meng (University of Exeter) Tags: big data, data analytics, data driven, machine learning Abstract: Smart grids are essential for sustainable urban energy systems, improving efficiency and integrating renewable sources. Accurately forecasting load demand is key for effective management, but is challenging due to unpredictable behaviors and dynamic consumption patterns. This paper introduces a new data-driven approach using smart meter data from various buildings in Cardiff, UK to better understand electricity consumption behaviors across seasons. Our methodology combines machine learning techniques with an in-depth analysis of physical building characteristics to conduct dynamic load usage behavior simulation. We employ consensus-based clustering to identify buildings with similar consumption behaviors and track dynamic changes in load usage over time. Furthermore, we identify key load-related features that influence consumption patterns, enhancing the precision of load demand forecasting. Empirical validation of our approach underscores its effectiveness in enhancing forecast accuracy and providing robust, sustainable strategies for energy management within the smart grid paradigm.
pdf

Environment Sustainability and Resilience - Resilience of Critical Infrastructures. Chair: Elie Azar (Carleton University)
Optimizing Cyber-Resilience in Critical Infrastructure Networks Ranjan Pal (MIT Sloan School of Management), Rohan Xavier Sequeira (University of Southern California), and Sander Zeijlemaker and Michael Siegel (MIT Sloan School of Management) Tags: Monte Carlo, optimization, complex systems, cybersecurity, resiliency Abstract: With the expanding cyber-risk terrain spanning business processes in digitally driven enterprises with critical infrastructure, it is inevitable in time that system process continuity (SPC) will be affected (e.g., via ransomware) for certain inter-dependent processes of such an enterprise, and hamper business continuity. We are interested in the question: how should managers of such enterprises optimize cyber-resilience (i.e., the ability to maintain SPC via absorbing and adapting to an adverse cyber-incident) for any complex networked critical infrastructure (CI) (sub-)system with multiple process functionality components (PFCs)? We prove via an algorithmic graph-theoretic approach that optimizing or approximately optimizing cyber-resilience within a pre-specified enterprise cyber-protection budget in any CI with networked and inter-dependent PFCs is NP-hard. Consequently, we propose a computationally tractable graph-based Monte-Carlo simulation framework to `optimize' (boost) cyber-resilience within any PFC network by allocating a constrained cyber-protection budget among PFCs in accordance with their Katz centralities in the PFC network.
pdf

Hybrid Simulation and Reinforcement Learning-Based Scheduling for Resilient Infrastructure Networks Best Contributed Applied Paper - Finalist Pavithra Sripathanallur Murali and Shima Mohebbi (George Mason University) Tags: agent-based, machine learning, resiliency, AnyLogic, student Abstract: Infrastructure systems are interdependent at various levels, and their collective performance is influenced by factors such as topology, budgetary decisions, resource availability, and awareness of interdependency. Traditional resource allocation models for improving resilience often assume a single decision-maker overseeing all scheduling decisions. However, critical infrastructures, characterized by a network-of-networks structure, are managed by individual entities with distinct boundaries. Moreover, the dynamic and stochastic nature of decision-making processes cannot always be captured via mathematical programming. This study develops a hybrid simulation model that merges top-down and bottom-up approaches. It captures organizational-level budgetary decision-making dynamics through system dynamics, and maintenance activities alongside evolving network performance through an agent-based model. Optimal restoration strategies maximizing network resilience are identified via deep reinforcement learning, constrained by financial allocations. This approach is applied to water distribution and mobility networks in Tampa, FL, demonstrating our method’s efficacy for restoring interdependent infrastructures.
pdf

Informing Building Retrofits using Surrogates of Physics-based Simulation Models: A Comparison of Multi-Objective Optimization Algorithms Seif Qiblawi and Elin Markarian (Carleton University); Shivram Krishnan, Anagha Divakaran, and Albert Thomas (Indian Institute of Technology Bombay); and Elie Azar (Carleton University) Tags: machine learning, optimization, Python, student Abstract: Surrogate models are increasingly used to reduce the computational costs of building performance simulation (BPS) models. However, they are rarely coupled with optimization algorithms to inform retrofitting decisions. The goal of this paper is to compare the performance of several commonly used meta-heuristic algorithms on a machine learning (ML) surrogate of a BPS model to better shape best practices in future work on surrogate model optimization. A surrogate model with an hourly prediction scheme is developed, and retrofits are optimized to minimize annual electric and natural gas loads, as well as PMVSET as a measure of discomfort. This study incorporates exploratory landscape analysis (ELA) to better understand the problem, then separately applies four optimization algorithms (random search, OMOPSO, NSGA-II, AMOSA). NSGA-II is the best-performing algorithm, finding 74% of the final set of Pareto efficient points and converging in 9% of the time required by the second fastest algorithm. pdf

Environment Sustainability and Resilience - Simulation and Environmental Management. Chair: Pavithra Sripathanallur Murali (George Mason University)
Mixed Energy and Production Scheduling in an Eco-Industrial Park Shufang Xie, Tao Zhang, Oliver Rose, and Tobias Uhlig (University of the Bundeswehr Munich) and Björn Vollack (Dresden University of Technology) Tags: discrete-event, optimization, environment, manufacturing, AnyLogic Abstract: In recent times, eco-industrial parks (EIP) have taken on a significant role in addressing environmental challenges and supporting sustainable practices. To enhance the efficient utilization of energy within the park and ensure the achievement of production targets among its members, the implementation of energy and production scheduling is imperative. This paper addresses this optimization challenge by formulating it as a constraint programming (CP) model. The optimized scheduling solutions generated by the CP model will be directly used to guide the operations within the simulation environment of the EIP. The paper compares the CP results with the results of another method that we developed in our previous research stage. The outcomes of the CP solver demonstrate its effectiveness in optimizing energy utilization and meeting production targets. This research contributes to developing a practical decision support system for resolving real-life scheduling problems in industrial parks. pdf

A Lake Cyanobacteria Colony Dynamics Simulation Supported by SPH Samuel Ferrero Losada, José Luis Risco Martín, José Antonio López Orozco, and Eva Besada Portas (Complutense University of Madrid) Tags: DEVS, system dynamics, digital twin, environment, student Abstract: Addressing the prediction of Harmful Cyanobacterial Blooms (CyanoHABs) is critical due to the increasing strain on global water resources from climate change and overexploitation. This paper introduces a combined physical and biological modeling approach for simulating the 3D migration, growth, and decay of cyanobacteria colonies in water bodies.
Utilizing the Smoothed Particle Hydrodynamics (SPH) method, our model accommodates complex geometries with high accuracy. This adaptable open-source framework significantly enhances the simulation of cyanobacteria migration in three dimensions, a capability often lacking in existing environmental lake simulators. Moreover, it is being integrated into an advanced early warning system, representing a digital twin of the water body. This integration aims to improve the prediction and management of CyanoHABs, contributing to safeguarding water quality and ecosystem health. pdf

Modeling Pathways to the Emission Trading System: Policy Recommendations for the UAE Nishant Bhattarai, Rahul Rajeevkumar Urs, Toufic Mezher, and Ahmad Mayyas (Khalifa University) Tags: discrete-event, conceptual modeling, environment, Matlab, student Abstract: This study leverages a meticulously developed model to explore the profound implications of Emission Trading Systems (ETS) on environmental and economic dimensions in the United Arab Emirates. The model delves into critical parameters, including emission caps and carbon prices, yielding transformative insights into environmental and economic avenues. The results reveal a direct correlation between the stringency of emission caps and emission reduction, particularly impactful in the power generation sector. Conversely, challenges faced by energy-intensive industries prompt discussions on innovative strategies such as alternative fuels and materials. The trading dynamics among participants, portrayed through a randomized auction process, revealed nuanced patterns and highlighted the challenges and opportunities within the emission trading system. This model provides a comprehensive understanding of the complexities inherent in the ETS and can serve as a strategic guide for policymakers and stakeholders, fostering a balanced approach between environmental and economic factors. pdf
Healthcare and Life Sciences Track Coordinator - Healthcare and Life Sciences: Bjorn Berg (University of Minnesota), Tugce Martagan (Eindhoven University of Technology), Varun Ramamohan (Indian Institute of Technology Delhi, Department of Mechanical Engineering)

Healthcare and Life Sciences - Healthcare Facility Planning. Chair: Jamol Pender (Cornell University)
A Process Simulation Model for a Histopathology Laboratory Yin-Chi Chan and Anandarup Mukherjee (University of Cambridge), Nicola Moretti (University College London), Momoko Nakaoka and Jorge Merino (University of Cambridge), Zahrah Rosun and Colin Carr (Cambridge University Hospitals), and Duncan McFarlane and Ajith Kumar Parlikad (University of Cambridge) Tags: discrete-event, healthcare, Arena Abstract: Currently, although simulation models considering pathology departments exist, either in isolation or as part of a larger hospital model, they generally do not consider the individual steps within the histopathology process itself, instead adopting a high-level view of the histopathology laboratory. This prevents the study of policies that can improve efficiency, reduce the laboratory turnaround time (TAT), and/or reduce the staffing costs required to achieve a certain level of laboratory throughput. In this paper, we consider a discrete-event simulation model for a histopathology department at a hospital in the East of England, UK. Our model captures the histopathology department with a higher level of detail than currently exists in the literature, and addresses some specific simulation challenges that arise from such a modeling approach. We then demonstrate how our simulation model can be used to answer various management questions regarding staffing levels, TAT, and the trade-offs between them. pdf

Navigating Complexity: Challenges in Developing Simulation Models for Sterile Processing Sayed Rezwanul Islam and Kevin Taaffe (Clemson University), Gabriel Segarra (Medical University of South Carolina), Sudeep Hegde and Lawrence Fredendall (Clemson University), and Niles Goodfellow and Kenneth Catchpole (Medical University of South Carolina) Abstract: The Sterile Processing Department (SPD) is an essential component of hospitals and healthcare facilities. It ensures the cleanliness, sterility, and proper functioning of medical instruments.
A well-designed SPD workflow can improve productivity, reduce operating room (OR) delays, and enhance patient safety. Developing a simulation model for sterile processing is challenging because of the complex interactions between different units, such as decontamination, assembly, sterilization, storage, and case-cart preparation. Moreover, the dynamic nature of the surgical volume, tray requirements, and staffing dynamics further complicates the modeling process. Furthermore, missing instruments, bioburden, and nonfunctional instruments add another layer of complexity. Additional tray requests from OR personnel also place undue strain on the SPD’s inventory of trays and its ability to process dirty trays and instruments. This study focuses on the following challenges: duplicate tray requests, replacement tray needs, tray representation, on-time start pressures, and staff shortages. pdf

Physician Staffing in Telemedicine: A Simulation-Based Approach for a Network of CVS Minute Clinics Shuwen Lu, Mark E. Lewis, and Jamol Pender (Cornell University) Tags: discrete-event, Monte Carlo, complex systems, healthcare, Python Abstract: Telemedicine has seen rapid expansion, partially because of the COVID-19 pandemic, which helped to reduce regulation regarding telemedicine availability. In particular, we are inspired by the minute-clinic model of CVS-Aetna, and in this paper, we build a minute-clinic simulation model to understand the proportion of time with which a nurse practitioner can obtain additional medical advice from a collaborating physician.
We use this simulation to construct staffing policies for collaborating physicians to address staffing issues nationally. We compare our simulated staffing schedules with those from Gaussian and Binomial approximations and assess their quality. Finally, we develop staffing procedures for when physicians are restricted by state regulations and compare them to situations with no restrictions to understand the impact of the regulations. pdf

Healthcare and Life Sciences - Simulation Modeling for Emergency Care Processes. Chair: Chang-Yan Shih (Tsinghua-Berkeley Shenzhen Institute)
Simulation of Emergency Care Systems: A Taxonomy and Future Directions Sean Shao Wei Lam (Singapore Health Services); Ashish Kumar (Singapore Health Services, Duke-NUS Medical School); Yuan Guo (WS Audiology); Michael Dorosan (Singapore Health Services); and Marcus Eng Hock Ong and Fahad Javaid Siddiqui (Duke-NUS Medical School) Tags: agent-based, discrete-event, system dynamics, healthcare Abstract: The Emergency Care System (ECS), comprising pre-hospital and in-hospital sub-systems, is a vital health infrastructure for round-the-clock life-saving care. The ECS is one of the most modelled parts of the health system. In the process of a comprehensive ongoing scoping review of ECS simulation using the methods of Discrete Event Simulation (DES), System Dynamics (SD), and Agent Based Modeling (ABM), we propose a method-agnostic simulation process framework and use this framework to suggest Research Questions (RQs) that will provide a state-of-the-art view of ECS simulation. Our RQs and the resulting encoding approach guide us to a taxonomy of ECS simulation. This taxonomy addresses potential gaps in understanding the systemic nature of the ECS and its linkages to other parts of the health and social services system. It also helps to conceptualize the attributes of ECS simulation models that will help them evolve into Digital Twins. pdf

Cross-Training Policies for Enhanced Resilience in Emergency Departments Moustafa Abdelwanis, Eman Ouda, Andrei Sleptchenko, Adriana Gabor, Mecit Simsekler, and Mohammed Omar (Khalifa University) Tags: discrete-event, healthcare, resiliency, AnyLogic, student Abstract: This paper investigates cross-training policies to enhance emergency department resilience in managing patient demand surges. We evaluated operational performance under varying patient flows using a simulation model based on an emergency department in Abu Dhabi, UAE.
Results demonstrated a significant increase in patient length of stay in the triage room due to a shortage of available nurses during surges. Conversely, nurses in other areas (adult zone, pediatrics, and fast-track) were less affected. Two cross-training policies were investigated: pooling nurses from the triage and adult zones, and expanding the pool to include the pediatrics section. The first policy reduced patient length of stay in triage by 91.71%. The second policy further improved flexibility, leading to reductions in length of stay. Additional experiments revealed the limitations of the second policy when subjected to higher surge levels; therefore, further research is needed to explore a broader range of policies and contexts. pdf

An Integrated Simulation Platform for Cardiac Arrest Response System Chang-Yan Shih, Kexin Cao, Xinglu Liu, Xizi Qiao, and Wai Kin Chan (Tsinghua University) Tags: agent-based, complex systems, healthcare, AnyLogic, student Abstract: In response to China's over 700,000 annual out-of-hospital cardiac arrests (OHCA), this paper presents a novel simulation framework integrating a Geographic Information System, event occurrence probability models, and an agent-based model to efficiently formulate, optimize, simulate, verify, and analyze emergency response efficiency in cities. The framework supports large-scale, long-term city-level optimization and simulation, allowing for the evaluation of various dispatch algorithms and deployment mechanisms. We offer insights into emergency first responder system design in Shenzhen, highlighting the significant impact of dispatch range, responder quantity, and skills ratio on survival rates. The experimental results indicate that the maximum effective dispatch range of the current dispatch strategy is 800 meters. When the number of responders is less than 100, prioritizing an increase in their number can significantly improve survival rates, with a maximum rise of 148%.
However, when the number exceeds 100, the focus should shift to augmenting the proportion of skilled responders, followed by the ratio of mobile responders. pdf

Healthcare and Life Sciences - Queueing-based Patient Flow Analyses. Chair: Laura Boyle (Queen's University Belfast)
Dependence between Arrival and Service Processes in Healthcare Simulation Modelling Laura Boyle (Queen's University Belfast) and Nigel Bean (University of Adelaide) Tags: discrete-event, complex systems, healthcare, R Abstract: Analytical and simulated queuing models are useful for understanding and improving healthcare systems, including emergency departments (EDs), which serve as vital access points internationally. EDs face challenges such as long waiting times, ambulance queuing, and bed-blocking, and simulation can be used to test strategies for improving these problems. Despite the widespread use of simulation, there is a lack of literature addressing the validity of the queuing theory assumptions underpinning it, particularly the assumption of independence between the arrival and service processes. This paper employs semi-experiments to test this assumption using real ED data. The results indicate that a correlation structure is present between the arrival and service processes in this data. The implications for simulation studies and directions for future work are discussed. pdf

Testing Facility Location With Constrained Queue Time Problem: A Case Study in Florida, USA Almir Antonio Monteiro Junior, Rafael Schneider, Vanessa de Almeida Guimarães, and Pedro Henrique González (Federal University of Rio de Janeiro) Tags: Monte Carlo, optimization, healthcare, Python, student Abstract: This study addresses the Testing Facility Location with Constrained Queue Time Problem. This optimization problem focuses on determining the best places to deploy testing sites, and their available testers, for infectious diseases, while constraining the maximum time in the queue with a given probability. An integer programming model is introduced and applied to the three most populous counties of Florida, United States.
Moreover, the Monte Carlo method is used to evaluate the model's output, aiming to check whether the queueing time constraint is satisfied. Through the experiments, a testing facility deployment plan can be determined for each county and further validated by the simulation. The results show that the solutions returned by the model performed successfully under the Monte Carlo evaluation, exceeding the maximum queueing time with no more than the predefined probability. pdf Agent-Based Simulation, Healthcare and Life Sciences, Hybrid SimulationCross-Track Session 1: Applications Chair: Burla Ondes (Purdue University)
Enhancing Forced Displacement Simulations: Integrating Health Facilities for Automatically Generated Routes Networks Alireza Jahani, Maziar Ghorbani, Diana Suleimenova, Yani Xue, and Derek Groen (Brunel University London) Tags: agent-based, optimization, healthcare, logistics, transportation Abstract AbstractThis paper introduces a novel approach to supporting healthcare accessibility for refugees during their movement to camps in regions with limited infrastructure. We achieve this by integrating the density of healthcare facilities into route networks created by customized pruning algorithms. Through rigorous data analysis and algorithm development, our research aims to optimize healthcare delivery routes and enhance healthcare outcomes for displaced populations. Our findings highlight Visit Tracking route pruning as the most effective method, with an Averaged Relative Difference (ARD) of 0.3837. Particularly in scenarios involving healthcare facility integration, this method outperforms others, including the manually extracted route network (0.4902), Direct Distance pruning (0.3912), Triangle pruning (0.3913), and Sequential Distance pruning (0.7846). Despite the inherent limitations of our proposed method, such as data availability and computational complexity, these quantifiable results underscore its potential contributions to healthcare planning, policy development, and humanitarian assistance efforts worldwide. pdfPatient Assignment and Prioritization for Multi-Stage Care with Reentrance Wei Liu, Mengshi Lu, and Pengyi Shi (Purdue University) Tags: discrete-event, healthcare, Matlab Abstract AbstractIn this paper, we study a queueing model that incorporates patient reentrance to reflect patients' recurring requests for nurse care and their rest periods between these requests. Within this framework, we address two levels of decision-making: the priority discipline decision for each nurse and the nurse-patient assignment problem. 
We introduce the shortest-first and longest-first rules in the priority discipline decision problem and show the condition under which each policy excels through theoretical analysis and comprehensive simulations. For the nurse-patient assignment problem, we propose two heuristic policies. We show that the policy maximizing the immediate decrease in holding costs outperforms the alternative policy, which considers the long-term aggregate holding cost. Additionally, both proposed policies significantly surpass the benchmark policy, which does not utilize queue length information. pdfSimulation-based Optimization for Large-scale Perishable Agri-food Cold Chain in Rwanda: Agent-based Modeling Approach Aghdas Badiee, Adam Gripton, and Philip Greening (Heriot-Watt University) and Toby Peters (University of Birmingham) Tags: agent-based, optimization, supply chain, AnyLogic, Python Abstract AbstractThe global food supply chain faces significant challenges in maintaining the quality and safety of perishable agri-food products. This study introduces a novel approach to demonstrate the efficiency of using the perishable agri-food cold supply chain (FCC) by integrating optimization techniques and agent-based modeling (ABM) simulation. Addressing complexities and challenges such as precise temperature control, emission reduction, waste minimization, and finding the best implementation of cold chain infrastructure, the research applies ABM to model dynamic interactions within the FCC. By testing thousands of simulation scenarios in AnyLogic, the paper demonstrates how the proposed model can support strategic decision-making, demonstrate potential export levels, assess crop quality over time, and evaluate waste reduction compared to non-cold chain scenarios. The research further discusses the implementation of the proposed model in a real case study in Rwanda, Africa, showcasing its contribution to optimizing configuration, reducing food loss and CO2 emissions. 
pdf Healthcare and Life SciencesClinical and Disease Pathway Modeling Chair: Ana Lucia Tula (Ecole des Mines de Saint-Etienne)
Bayesian Optimization for Clinical Pathway Decomposition from Aggregate Data William Plumb, Alex Bottle, and Giuliano Casale (Imperial College London) Tags: machine learning, optimization, healthcare Abstract AbstractData protection rules often impose anonymization requirements on datasets by means of aggregations that hinder the exact simulation of individual subjects. For example, clinical pathways that disclose medical conditions of patients may typically need to be aggregated to preserve anonymity of the subjects. However, aggregation unavoidably results in biasing the simulation process, for example, by introducing spurious pathways that can skew the simulated trajectories. In this paper, we study this problem and develop approximate decomposition methods that mitigate its impact. Our method is shown to produce, from the raw aggregates, pathways with higher fidelity than sampling a Markov chain model of the aggregate data, while preserving the length of the original pathways. In particular, we observe a relative increase in average cosine similarity of up to 52% with respect to the true pathways compared with aggregate Markov chain sampling. pdfAutomatic Population-based Responsibility Modeling Using Process Mining: Application to Chronic Obstructive Pulmonary Disease Ana Lucia Tula, Vincent Augusto, Xavier Boucher, and Marianne Sarazin (Ecole des Mines de Saint-Etienne) Tags: agent-based, process mining, healthcare, AnyLogic, Python Abstract AbstractPopulation-based responsibility pursues three objectives: better health and better care at a better cost. The project aims to apply this paradigm using a process mining approach to build the clinical pathway of a population suffering from a given disease and, finally, to test whether the process model accurately represents its clinical pathway. We assess our approach on a cohort of patients affected with Chronic Obstructive Pulmonary Disease. 
We use a national medico-administrative database of hospitalizations to extract our population; we then stratify the disease and apply process mining. We propose different models with different rules to extract event logs, and a design of experiments to compare the models using quantitative indicators: fitness, precision, generalization, simplicity, and replicability through simulation. We also propose a qualitative evaluation of the best models based on medical expert opinion. Our approach confirms that the models represent the medical records well and that the simulation partially replicates them. pdf Healthcare and Life SciencesAgent-Based Modeling in Healthcare - I Chair: Vishnunarayan Girishan Prabhu (University of Central Florida)
Modeling the Lifelong Impact of Changes in Physical Activity Behavior on Non-Communicable Disease Events as a Result of the UK Covid-19 Lockdown Kate Mintram, Bhargavi Gottimukkala, and Anastasia Anagnostou (Brunel University London) Tags: agent-based, COVID, Java Abstract AbstractThe risk of developing non-communicable diseases (NCDs) is inextricably linked to the level of physical activity undertaken by individuals. The Covid-19 lockdown caused a shift in physical activity behaviors among the UK population. This study used an agent-based simulation to predict the impact of the reduction in physical activity caused by the UK lockdown, quantified using data from smartphone tracked activity, on the number of annual NCD occurrences over the lifetime of a cohort. The model considers an individual’s characteristics and health status to predict the risk of developing type 2 diabetes (T2D), cardiovascular disease (CVD), depression and musculoskeletal injuries (MSI). When physical activity was reduced as a result of the lockdown, the model showed an increase in the number of T2D, CVD and depression events and a decrease in the number of MSI events, over the short-term, but the number of incidences recovered over the lifetime of the cohort. pdfInvestigating the Impact of Pandemic on the Perioperative Healthcare Workers Availability: An Agent-Based Approach Shweta Prasad (University of North Carolina at Charlotte), Vishnunarayan Girishan Prabhu (University of Central Florida), and William Hand (PRISMA Health - Upstate) Abstract AbstractProtecting healthcare workers (HCWs) during a pandemic is critical to provide timely medical care for patients. Although prior studies have investigated HCW unavailability during the COVID-19 pandemic, the studies have not investigated the impact of parameters such as patient census, vaccination rates, transmission rates, and multiple hospital locations on HCW availability. 
This research considers a high-risk HCW group of perioperative staff to investigate the impact of segregating and rotating staffing policies on HCW unavailability during a pandemic in a health system with multiple locations. An agent-based model with a SEIR compartmental model was developed to simulate various scenarios. Simulated findings indicate that segregating and rotating policies significantly (p-value < 0.01) reduced the peak weekly unavailability of HCWs and the total percentage of HCWs getting infected by as much as 25% and 60% when vaccination rates were lower (<75%). However, these benefits diminished when the vaccination rates increased to 75%. pdfAn Agent-based Model to Assess Interventions for Continuous Care of Cardiovascular Diseases After Natural Disasters Faria Farzana and Eduardo Perez (Texas State University) Tags: agent-based, healthcare, AnyLogic, student Abstract AbstractCardiovascular disease (CVD) is contributing significantly to rising death rates in the U.S. Furthermore, areas susceptible to natural disasters face more challenges. When a disaster strikes, a part of the population seeks refuge in shelters where access to essential treatments is limited. The limitation of treatments results in a higher mortality rate among individuals with CVD. In response, this research presents an agent-based model to explore the repercussions of CVD patients lacking access to essential treatment. The model is built to represent the potential impact on health outcomes of individuals with CVD conditions that might be relocated to shelters during a hurricane event. The simulation results show an average 14% rise in CVD mortality after hurricane occurrences which approximately represent the rates observed from hurricane events in Texas. The model is an instrument to forecast long-term health outcomes and to plan for public health interventions associated with disaster relief. 
pdf Healthcare and Life SciencesTransplantation/Transfusion & Personalized Medicine Chair: Wesley Marrero (Thayer School of Engineering at Dartmouth)
A Kidney Paired Donation Program Simulation Zhenyu Yue and Michael C. Fu (University of Maryland, College Park) and Hadi El-Amine, Jie Xu, and Chun-Hung Chen (George Mason University) Tags: discrete-event, healthcare, student Abstract AbstractThe Organ Procurement and Transplantation Network (OPTN) Kidney Paired Donation (KPD) Program, managed by the United Network for Organ Sharing (UNOS), supports patients with end-stage renal disease who have incompatible living donors. By widening the donor pool through recipient-donor pair exchanges, KPD increases the chance of finding compatible donors. We present a simulation model that combines KPD with desensitization treatments to assess the risks and benefits for matched donors and patients. The simulation model incorporates combinatorial optimization algorithms to match patients with the most compatible donors, considering potential donation declines and simulating the outcomes of successful matches. The simulation output includes patient longevity and desensitization-related adverse effects to help analyze the benefits of the KPD program for those patients facing donor incompatibility. pdfOptimization of Extended Red Blood Cell Matching in Transfusion Dependent Sickle Cell Patients Folarin B. Oyebolu and Marie Chion (University of Cambridge), Merel L. Wemelsfelder (Sanquin Research), Sara Trompeter (University College Hospitals NHS Foundation Trust), and Nicholas Gleadall and William J. Astle (University of Cambridge) Tags: discrete-event, optimization, healthcare, supply chain, Python Abstract AbstractBlood transfusion is a life-saving treatment for people with sickle cell disorder (SCD). Presently, blood is matched manually for transfusion using incomplete red cell blood type information, to minimize the immunological incompatibility between donor and patient. 
We are investigating alternative approaches to blood allocation that exploit extended blood type information measured by a new genetic test shortly to be introduced by the National Health Service in England. We formulate sequential allocation decisions as a Markov decision process and study penalty-based policies for matching, which consider up to 17 blood group antigens, including policies that look ahead to future patient appointments. We tune the policy parameters of a matching rule to minimize formation of antibodies (alloimmunization) in SCD patients and estimate that a 98% reduction in alloimmunization can be achieved compared to current policies. Finally, we show that the tuned policy parameters are robust to major supply shocks. pdfModel-Based Q-Learning with Monotone Policies for Personalized Management of Hypertension Wesley Marrero and Lan Yi (Dartmouth College) Abstract AbstractHypertension is a crucial controllable risk factor of atherosclerotic cardiovascular disease, a leading cause of death in the United States. While traditional analytic techniques may capture the complexities of hypertension treatment planning, they generally provide unintuitive treatment recommendations. This paper aims to advance the acceptance of analytic techniques in clinical practice by presenting a method to obtain interpretable treatment plans. To this end, we introduce the monotone Q-learning algorithm, which guarantees policies are nondecreasing on patients' health severity by limiting the exploration of treatment choices and solving simple integer programs. We represent a set of clinically representative patient profiles through Markov decision process models and compare the performance of our approximately optimal monotone policies with the optimal policy, optimal monotone policy, and current clinical guidelines. 
The approximately optimal monotone policies outperform the current clinical guidelines while displaying small losses in quality-adjusted life years compared to the optimal policy. pdf Healthcare and Life SciencesEconomic Analyses & Behavioral Science-Based Simulation Chair: Shoaib Mohd (Indian Institute of Technology Delhi)
Estimating and Projecting the Economic Impact of Antiretroviral Therapy on the US Economy through an Updated HIV Microsimulation Model Haluk Damgacioglu, Kalyani Sonawane, Poria Dorali, and Ashish A. Deshmukh (Medical University of South Carolina) Tags: data driven, discrete-event, healthcare, R Abstract AbstractThis paper proposes a comprehensively calibrated HIV simulation model, validated against HIV prevalence, related mortality, and viral load suppression rates. The model is designed for epidemiological forecasting and policy analysis. Using this model, we estimated the economic impact of antiretroviral therapy (ART) in the U.S. The model tracks the progression of the HIV epidemic on an individual basis, considering factors such as sex, age, transmission risk, and treatment adherence to project HIV prevalence and treatment statistics through 2040. It predicts an increase in the U.S. HIV population from 1.20 million in 2022 to 1.24 million by 2030, with a subsequent decrease to 1.21 million by 2040, reflecting demographic shifts and enhancements in ART access and effectiveness. Economically, the model predicts a significant rise in financial burden, with costs increasing from 38 billion US dollars in 2023 to 60 billion US dollars by 2040. pdfTowards Using Simulation to Evaluate the Circular Economy of Small Medical Devices Mohd Shoaib and Antuela Tako (Nottingham Trent University) and Ramzi Fayad and Armando Vargas-Palacios (University of Leeds) Abstract AbstractA linear economy (LE)-based take-make-use-waste model for medical devices is environmentally unsustainable. A significant proportion of the UK’s National Health Service (NHS) emissions is derived from medical devices. The alternative, a circular economy (CE) system, has the potential to mitigate the environmental impacts associated with LE. This paper evaluates the effect of introducing CE in the healthcare supply chain in the context of small medical devices (SMDs). 
We develop simulation models that quantify the environmental, operational, and financial impact of circular interventions in the value chain of SMDs. Two complementary simulation models that evaluate the impact of CE for a surgical instrument, laparoscopic scissors, as an example, representing the whole-system perspective and cost-effectiveness from the hospital setting perspective, are presented. Our preliminary findings suggest that the introduction of CE leads to reduced overall environmental emissions, and to improved cost-effectiveness from a hospital setting perspective. pdfIncorporating Face Mask Usage in Agent-based Models Using Personal Beliefs and Perceptions: an Application of the Health Belief Model Sebastian Rodriguez-Cartes, Maria Mayorga, Osman Özaltın, and Julie Swann (North Carolina State University) Abstract AbstractThe modelling of human behavior is a critical component of any simulation tool that aims to represent the spread of an infectious disease throughout a population. However, few modeling approaches attempt to incorporate protective behaviors using models grounded in theories from the behavioral sciences. Here, we demonstrate how to incorporate human behavior accounting for personal beliefs and perceptions by using a commonly known behavioral framework. We implemented the proposed model within an agent-based simulation to drive the agent’s decision related to wearing a face mask. We used survey data to characterize a synthetic population, and investigate the effect of policies that aim to modify beliefs with the goal of promoting face mask usage. Our results highlight the importance of incorporating the individual drivers of behavior to better represent adoption of protective actions against health threats, enhancing the ability of simulation tools to quantify the impact of policy interventions. pdf Healthcare and Life SciencesAgent-Based Modeling in Healthcare - II Chair: Hannah Smalley (Georgia Institute of Technology)
Using Simulation Modeling to Evaluate the Impact of Proactive Tethering Methodologies on Guinea Worm Infections among Dogs in Chad Hannah Smalley and Pinar Keskinocak (Georgia Institute of Technology); Julie Swann (North Carolina State University); and Maryann Delea, Obiora Eneanya, and Adam Weiss (The Carter Center) Abstract AbstractThe detection of Guinea worm, or dracunculiasis, infections in animals, particularly dogs in Chad, has created challenges for related eradication efforts globally. Proactive tethering is a recent intervention employed to contain dogs and minimize potential exposure to water sources. Approximately 84% of 40,962 eligible dogs were reportedly tethered in Chad during 2023. However, household adherence to tethering intervention guidelines is not always uniform or ideal, with some dogs released from tethering at night and others tethered only intermittently. We adapt an agent-based simulation model to analyze various proactive tethering scenarios. Selecting dogs for tethering randomly each day results in up to 7 times fewer infections over time than tethering a fixed selection of dogs. Releasing dogs from tethering for part of the day results in up to 25 times more infections compared to full-day tethering. Understanding the impacts of proactive tethering practices can inform implementation strategies for interventions. pdfQuantifying the Impact of Vaccinating Under-Immunized Groups in Polio Outbreaks: A Simulation-Based Study Yuming Sun, Hongyu Xue, Pinar Keskinocak, and Lauren Steimle (Georgia Institute of Technology) Abstract AbstractPolio, an infectious disease that causes paralysis, remains a global health concern, especially with the emergence of circulating vaccine-derived poliovirus (cVDPV) and recurring outbreaks in areas with cohorts of under-immunized individuals. This study assessed how the allocation of vaccines during a polio outbreak response might impact outcomes. 
Adapting a compartmental simulation model, we projected poliovirus transmission from 2024 to 2026 under different levels of vaccination campaign coverage (i.e., the proportion of the target population reached by vaccination), vaccine allocation schemes (e.g., across different immunity groups), and vaccination campaign delays. Results highlighted that compared to other allocation schemes, priority allocation of vaccines to under-immunized groups (i) significantly reduced the number of paralytic cases, even with lower coverage and longer delay; (ii) achieved die-out of transmission with two rounds of vaccination if the delay was short (<= 3 weeks) and coverage was high (>= 70%). pdfA Standardized Framework for Modeling Non-Pharmaceutical Interventions in Individual-Based Infectious Disease Simulations Johannes Ponge, Janik Suer, Bernd Hellingrath, and André Karch (University of Münster) Tags: agent-based, conceptual modeling, complex systems, COVID, healthcare Abstract AbstractIndividual-based infectious disease simulations play a fundamental role in the evaluation of intervention strategies before implementing them as public policy during emerging epidemics. While public health offices provide a vast array of potential measures in their preparedness plans, their mode of operation in practice is conditioned by a plethora of regional legal, economic, and demographic factors. This work introduces the Trigger-Strategy-Measure (TriSM) formalization as our main contribution and its implementation in the German Epidemic Microsimulation System (GEMS). TriSM is a standardized formalization for modeling complex interventions in individual-based infectious disease simulations, focusing on granularity, extensibility, expressiveness, and usability. We demonstrate TriSM’s capabilities in six simulation case studies where we apply nuanced intervention strategies to a COVID-19-like outbreak scenario and evaluate their effectiveness. 
Our work contributes to the ongoing efforts of increasing fidelity and explicating implicit assumptions in individual-based infectious disease models through a standardized formalization. pdf
Track Coordinator - Hybrid Simulation: Anastasia Anagnostou (Brunel University London), Masoud Fakhimi (University of Surrey), Mohd Shoaib (Loughborough University), Antuela Tako (Nottingham Trent University) Hybrid SimulationHybrid Simulation Methods I Chair: Varun Ramamohan (Indian Institute of Technology Delhi, Department of Mechanical Engineering)
Metamodel of a Simulation Model of Colorectal Cancer with Diverse Clinic Populations and Intervention Scenarios Ashley Stanfield and Maria Mayorga (North Carolina State University) and Meghan O'Leary and Kristen Hassmiller Lich (University of North Carolina at Chapel Hill) Abstract AbstractColorectal cancer (CRC) prevention is dependent on increasing screening rates, a strategy proven effective in reducing cancer cases and potential life years lost. Simulation models of CRC can be used to project expected outcomes associated with different evidence-based interventions. However, traditional simulation for each population of interest is computationally intensive and requires a model expert. To address this, we proposed a metamodeling approach, considering various techniques such as linear regression and random forest. By creating a metamodel of the simulation, decision makers can generate both individual and population-level estimates directly and instantaneously. We aimed to create a metamodel of an existing CRC simulation model that can be adapted for different interventions and populations to predict cancer cases averted and life years lost. pdfA Hybrid Approach Combining Simulation and a Queueing Model for Optimizing a Biomanufacturing System Danielle Morey (University of Washington), Giulia Pedrielli (Arizona State University), and Zelda Zabinsky (University of Washington) Abstract AbstractWe explore a hybrid approach to designing a biomanufacturing system with low-volume, high variability, and individualized products. Simulating a large number of possible configurations to determine those that meet target production goals is computationally impractical. We create an explainable surrogate model, specifically a queueing network model, that is calibrated to the output of a few computationally expensive simulations. 
The queueing network model enables a quick exploration of large numbers of mixed integer-continuous configurations, which would be challenging for traditional surrogate-based approaches. The queueing network model is used to quickly identify promising regions where a few configurations can then be evaluated with the simulation. The difference in evaluations at these configurations is used to decide whether the queueing model requires partitioning and/or re-calibration. The use of this hybrid approach with an explainable surrogate enables analysis, such as identifying bottlenecks, and gives insight into robust designs of the biomanufacturing system. pdfA Novel Approach for Outcomes Estimation in Hybrid Simulation Models of Disease Transmission and Progression Soham Das, Aparna Venkataraman, and Varun Ramamohan (Indian Institute of Technology Delhi) Abstract AbstractIn this study, we consider hybrid simulations consisting of an agent-based simulation (ABS) to model disease transmission in a population, and a discrete-time Markov chain executed as a Monte Carlo simulation to model the heterogeneous progression of the disease in infected agents. In such scenarios, execution of the ABS is stopped at a certain time point. At this point, disease-related outcomes for infected agents are estimated by executing the disease progression Monte Carlo simulation for each infected agent over their lifespans, well beyond the execution horizon of the ABS. This can incur substantial computational expense. We present a novel method to alleviate this computational burden by randomly sampling and allocating disease-related outcomes from a repository of outcomes generated and stored as a one-time exercise prior to execution of the hybrid simulation. We demonstrate the effectiveness of our approach via a stylized hybrid simulation of a hypothetical infectious disease transmission scenario. 
pdf Hybrid SimulationInnovative Hybrid Simulation Chair: Alison Harper (University of Exeter, The Business School)
A Maturity Model for Digital Twins in Healthcare Navonil Mustafee and Alison Harper (University of Exeter); Joe Viana (Norwegian University of Science and Technology, St. Olav’s Hospital); and Tom Monks (University of Exeter) Abstract AbstractDigital models, digital shadows, and digital twins (DTs) are increasingly used in manufacturing/Industry 4.0 to represent levels of integration between physical systems and their digital counterparts; data-flow mechanisms are the enablers of such integration. Healthcare operations management has also witnessed rising interest in hybrid models that use real-time data to increase situational awareness (SA) and enable short-term decision-making. In M&S literature, such models are referred to as Real-time Simulations (RtS) and DTs. Healthcare organizations can realize a heightened state of SA by transitioning from conventional modeling to RtS/DTs. The paper presents a Maturity Model for DTs to contextualize the increasing levels of healthcare Information Systems/Information Technology (IS/IT) integration with real-time models that such a shift will necessitate. The higher the Maturity Level of IS/IT integration, the greater the opportunity to develop modeling artifacts that realize the potential of real-time data and enable organizations to attain higher levels of SA. pdfDeploying Reusable Healthcare Simulation Models in Python Alison Harper, Thomas Monks, and Navonil Mustafee (University of Exeter) Abstract AbstractDiscrete-event simulation (DES) models for healthcare service planning are time-consuming to develop for both modellers and healthcare stakeholders. Model reuse is seen as a potential solution to reduce duplication of effort and maximise the potential value gained from the model. One approach to model reuse is to deploy a simulation model for the same purpose in a single application area, which can be used for planning by healthcare staff such as managers, clinicians or analysts. 
A model deployed for reuse by healthcare stakeholders needs to be shared and accessible. However, Python can present accessibility and usability challenges for non-technical users. In this paper we investigate some of the advantages and disadvantages to several methods of deploying interactive DES models to non-technical healthcare users. Combining DES with methods and tools from computer science and software engineering, these hybrid models aim to increase the usability, functionality, and accessibility of simulation models. pdf Agent-Based Simulation, Healthcare and Life Sciences, Hybrid SimulationCross-Track Session 1: Applications Chair: Burla Ondes (Purdue University)
Enhancing Forced Displacement Simulations: Integrating Health Facilities for Automatically Generated Routes Networks Alireza Jahani, Maziar Ghorbani, Diana Suleimenova, Yani Xue, and Derek Groen (Brunel University London) Tags: agent-based, optimization, healthcare, logistics, transportation Abstract AbstractThis paper introduces a novel approach to supporting healthcare accessibility for refugees during their movement to camps in regions with limited infrastructure. We achieve this by integrating the density of healthcare facilities into route networks created by customized pruning algorithms. Through rigorous data analysis and algorithm development, our research aims to optimize healthcare delivery routes and enhance healthcare outcomes for displaced populations. Our findings highlight Visit Tracking route pruning as the most effective method, with an Averaged Relative Difference (ARD) of 0.3837. Particularly in scenarios involving healthcare facility integration, this method outperforms others, including the manual extracted route network (0.4902), Direct Distance pruning (0.3912), Triangle pruning (0.3913), and Sequential Distance pruning (0.7846). Despite the inherent limitations of our proposed method, such as data availability and computational complexity, these quantifiable results underscore its potential contributions to healthcare planning, policy development, and humanitarian assistance efforts worldwide. pdfPatient Assignment and Prioritization for Multi-Stage Care with Reentrance Wei Liu, Mengshi Lu, and Pengyi Shi (Purdue University) Tags: discrete-event, healthcare, Matlab Abstract AbstractIn this paper, we study a queueing model that incorporates patient reentrance to reflect patients' recurring requests for nurse care and their rest periods between these requests. Within this framework, we address two levels of decision-making: the priority discipline decision for each nurse and the nurse-patient assignment problem. 
We introduce the shortest-first and longest-first rules in the priority discipline decision problem and show the condition under which each policy excels through theoretical analysis and comprehensive simulations. For the nurse-patient assignment problem, we propose two heuristic policies. We show that the policy maximizing the immediate decrease in holding costs outperforms the alternative policy, which considers the long-term aggregate holding cost. Additionally, both proposed policies significantly surpass the benchmark policy, which does not utilize queue length information. pdfSimulation-based Optimization for Large-scale Perishable Agri-food Cold Chain in Rwanda: Agent-based Modeling Approach Aghdas Badiee, Adam Gripton, and Philip Greening (Heriot-Watt University) and Toby Peters (University of Birmingham) Tags: agent-based, optimization, supply chain, AnyLogic, Python Abstract AbstractThe global food supply chain faces significant challenges in maintaining the quality and safety of perishable agri-food products. This study introduces a novel approach to demonstrating the efficiency of the perishable agri-food cold supply chain (FCC) by integrating optimization techniques and agent-based modeling (ABM) simulation. Addressing complexities and challenges such as precise temperature control, emission reduction, waste minimization, and finding the best implementation of cold chain infrastructure, the research applies ABM to model dynamic interactions within the FCC. By testing thousands of simulation scenarios in AnyLogic, the paper demonstrates how the proposed model can support strategic decision-making, demonstrate potential export levels, assess crop quality over time, and evaluate waste reduction compared to non-cold chain scenarios. The research further discusses the implementation of the proposed model in a real case study in Rwanda, Africa, showcasing its contribution to optimizing configuration and reducing food loss and CO2 emissions. 
pdf Hybrid SimulationHybrid Simulation in Manufacturing and Logistics Chair: Anastasia Anagnostou (Brunel University London)
Multi-method Modeling and Simulation of A Vertical Lift Module with an Integrated Buffer System Using Anylogic Noe Tavira, Jr.; Abhimanyu Sharotry; and Jesus A. Jimenez (Texas State University) and Jakob Marolt and Tone Lerher (University of Maribor) Tags: agent-based, discrete-event, hybrid, AnyLogic, student Abstract AbstractAs industry trends continue to push sellers toward transitioning the sale of their products exclusively to e-commerce platforms, companies must remain vigilant and recognize the requirement for their products to be safely stored and quickly retrieved. This research presents a comprehensive model and simulation study of a Vertical Lift Module (VLM) with an integrated shuttle-based storage and retrieval system (SBS/RS) or buffer system. This work evaluates a proposed solution to the ever-increasing storage and retrieval challenges faced by warehouses worldwide. The VLM system was modeled using AnyLogic software to evaluate system capacity, travel distance, velocity profiles, and other user-defined operational constraints. The VLM performance is modeled under various conditions and compared to the performance of a traditional stand-alone VLM in terms of throughput and cycle time to identify potential VLM-Buffer system integration drawbacks or limitations. pdfPlanning a Material Replenishment Through Autonomous Mobile Robot in an Assembly Plant Using a Hybrid Simulation Approach Rupesh Bade Shrestha, Ellie Hungerford, and Konstantinos Mykoniatis (Auburn University) Tags: agent-based, discrete-event, hybrid, logistics, AnyLogic Abstract AbstractThis paper focuses on planning the implementation of an autonomous mobile robot in an existing material replenishment system of an assembly plant using a hybrid simulation approach. 
In this research, we aim to identify proper strategies for the number of containers or payloads the robot should carry per shift under different scenarios using Agent-Based Modeling and Discrete Event Simulation. Our primary objective is to minimize the number of shifts a mobile robot should travel for replenishment while keeping idle time low across all stations and ensuring timely material replenishment. The results show that choosing which strategy to use for the implementation of the autonomously navigating robot depends on the maximum number of containers it can carry and the utilization of payload space. While this paper focuses on the Tiger Motors Assembly line at Auburn University, its applications could be extended to similar assembly plants equipped with a similar material replenishment system. pdfSupply Chain Resilience Optimization with Agent-Based Modeling (SCROAM): A Novel Hybrid Framework Anastasia Anagnostou, Kate Mintram, and Simon J. E. Taylor (Brunel University London) Abstract AbstractSupply chains are vulnerable to an array of exogenous disruptions, including operational contingencies, natural disasters, terrorism, and political and geopolitical instability. In order to ensure resilience to these disruptions, supply chains can use mitigation strategies to minimize risk and maximize recovery. Modeling approaches can be utilized to determine the most appropriate mitigation strategies for a specific scenario; however, there is currently no recognized modeling framework which can be applied to all supply chain sectors. This paper describes the key disruption risks to supply chains; the resilience and optimization strategies and performance metrics employed by supply chains to mitigate these risks; and the applications of simulation modeling in supply chain management. 
We present a hybrid framework for using agent-based modeling, alongside early warning systems, many-objective optimization, and option awareness analysis, to manage exogenous risks for a non-specific supply chain. pdf Hybrid SimulationHybrid Simulation in Healthcare Chair: Siddharth Abrol Neena (University of Florida)
Assessing the Impact of Physicians' Behavior Variability on Performance Indicators in Emergency Departments: an Agent-Based Model Miguel Baigorri, Marta Cildoz, and Fermín Mallor (Public University of Navarre) Tags: agent-based, discrete-event, healthcare, hybrid Abstract AbstractIn emergency departments (EDs), traditional simulation models often overlook the variability in physician practice, assuming uniform service provision. Our study introduces a hybrid agent-based discrete-event simulation (AB-DES) model to capture this variability. Through simulation scenarios based on real ED data, we assess the impact of physician behavior on key performance indicators such as patient waiting times and physician stress levels. Results show significant variability in both individual physician performance and average metrics across scenarios. By integrating physician agent modeling, informed by literature from medical and workplace psychology, our approach offers a more nuanced representation of ED dynamics. This model serves as a foundation for future developments towards digital twins, facilitating real-time ED management. Our findings emphasize the importance of considering physician behavior for accurate performance assessment and optimization. pdfA Hybrid Simulation Approach for Modeling Critical Care Delivery in ICU Xiang Zhong, Siddharth Vipankumar Abrol Neena, and Grace Yao Hou (University of Florida) and Yue Dong, Amos Lal, and Ognjen Gajic (Mayo Clinic) Tags: agent-based, discrete-event, healthcare, hybrid, AnyLogic Abstract AbstractCritical care delivery entails a complex human-centric system. Patients and a multidisciplinary care team are the major autonomous agents in the system. Their actions and interactions with each other and the environment drive the dynamic evolution of the system and determine the system outcomes (e.g., patient outcome, provider burnout, care quality, system efficiency). 
The objective of this study was to model critical care delivery in an ICU to provide decision support for ICU resource management. A hybrid simulation approach was developed, combining agent-based simulation for modeling patients and care providers with discrete-event simulation for modeling care pathways. This approach leverages clinical knowledge for modeling individual patient trajectories and care services endogenized from patient needs. It allows us to understand how the arrival flow of patients, the patient disease condition, and the care protocols jointly affect ICU census and provider workload, and build a pathway towards digital twinning of ICUs. pdf Hybrid SimulationPanel: Ten Years of the Hybrid Simulation Track: Reflections and Vision for the Future Chair: Antuela Tako (Nottingham Trent University)
Ten Years of the Hybrid Simulation Track: Reflections and Vision for the Future Anastasia Anagnostou (Brunel University London), Sally Brailsford (University of Southampton), Tillal Eldabi (University of Bradford), Navonil Mustafee (University of Exeter), and Antuela Tako (Nottingham Trent University) Abstract AbstractThe Hybrid Simulation (HS) track was included in the Winter Simulation Conference (WSC) proceedings as a full conference track for the first time in 2014. A decade has passed since that inaugural track, and HS research and practice have seen impressive advancements during this time. This paper, based on a high-level review of the published works in the last ten years of the HS track, reflects on its successes and challenges and sets the scene for the future of the field. The paper is authored by the HS track organizers, both past and present, who report on the track’s history, the nature of HS applications, the modeling tools and software available, as well as implementation challenges and the users' perspective. Finally, the paper discusses the future of HS. pdf Hybrid SimulationHybrid Simulation Methods II Chair: Katie Mintram (Brunel University London)
Testing Methodology For DEVS Models In Cadmium Curtis Winstanley and Gabriel Wainer (Carleton University) Tags: DEVS, validation, verification, C++, student Abstract AbstractThe practice of testing in modeling and simulation software development can be a very lengthy and tedious process but is arguably one of the most important phases in the software development lifecycle. As the complexity of a simulation model increases, so does the amount of testing required to thoroughly verify and validate it and to achieve adequate quality assurance of the software. This paper introduces a testing framework that is used to assist in proving the validity of DEVS atomic models in the open-source simulation tool Cadmium. Furthermore, this framework utilizes the ChatGPT Application Programming Interface (API) to help lighten the workload involved in testing those DEVS atomic models. We demonstrate the use and effectiveness of the framework with the Cadmium simulator. pdfHybrid Modeling Integrating Artificial Intelligence and Modeling & Simulation Paradigms Andreas Tolk (MITRE Corporation) Abstract AbstractThis paper discusses the complementary relationship between Modeling and Simulation (M&S) and Artificial Intelligence (AI) methods like machine learning. While M&S uses algorithms to model system behavior from input parameters, AI learns patterns from correlation in data. The paper argues that hybrid models combining M&S and AI can be more powerful than either alone. It provides a conceptual framework showing how M&S and AI can be integrated in sequential, parallel, complementary or competitive configurations. Several example applications are given where AI enhances M&S and vice versa, such as using AI to optimize simulation parameters, generate synthetic training data for AI from simulations, interpret AI model behavior through simulation, and automate aspects of simulation development with AI assistance. 
The potential benefits of hybrid AI/M&S modeling span improved accuracy, efficiency, trustworthiness and cross-disciplinary collaboration. The paper calls for further research developing a solid theoretical foundation for merging these complementary paradigms. pdf
Track Coordinator - Introductory Tutorials: Canan Gunes Corlu (Boston University), Chang-Han Rhee (Northwestern University) Introductory TutorialsAn Introduction to Digital Twins Chair: Andrea Ferrari (Politecnico di Torino)
An Introduction to Digital Twins Andrea Matta (Politecnico di Milano) and Giovanni Lugaresi (KU Leuven) Abstract AbstractThis work presents an introduction to digital twins, digital versions of real objects designed to aid in analysis, enhancements, and decision-making. Its main goal is to introduce what digital twins are, highlighting their key characteristics, their role in supporting real-world counterparts, and the models they employ. pdf Introductory TutorialsTwenty-Three Critical Pitfalls in Simulation Modeling and How to Avoid Them Chair: Giovanni Lugaresi (KU Leuven)
Twenty-Three Critical Pitfalls in Simulation Modeling and How to Avoid Them Averill M. Law (Averill M. Law & Associates, Inc.) Abstract AbstractMany simulation projects are less than successful because “analysts” view simulation modeling as a complicated exercise in computer programming. This is probably caused by their education being limited to vendor training or an undergraduate simulation course that focuses on how to use a particular simulation-software package. Unsuccessful projects also result from lack of real-world experience in performing simulation studies. In this tutorial we discuss 23 critical pitfalls that can cause a simulation project to result in failure. These pitfalls fall into four categories: (1) modeling and validation, (2) simulation software, (3) modeling system randomness, and (4) design and analysis of simulation experiments. pdf Introductory TutorialsSimulation Optimization: An Introductory Tutorial on Methodology Chair: David J. Eckman (Texas A&M University)
Simulation Optimization: An Introductory Tutorial on Methodology Sara Shashaani (North Carolina State University) Abstract AbstractWith an upward trend for use in real-world problems of high uncertainty, the field of simulation optimization (SO) is evolving to aid in finding near-optimal solutions more rapidly and reliably. A comprehensive overview of the vast and diverse literature in continuous and discrete SO over large spaces is difficult. This short tutorial intends instead to introduce the methodological landscape, striking a middle ground between statistical analysis of Monte Carlo sampling and mathematical analysis of numerical optimization. Particular attention to sampling and its impact on SO algorithms highlights open and promising research directions. pdf Introductory TutorialsAn Introductory Tutorial for the Kotlin Simulation Library Chair: Ignacio J. Martinez-Moyano (Argonne National Laboratory, University of Chicago)
An Introductory Tutorial for the Kotlin Simulation Library Manuel Rossetti (University of Arkansas) Tags: discrete-event, education, tutorial, open source, government Abstract AbstractThe Kotlin Simulation Library (KSL) is an open-source library written in the Kotlin programming language that facilitates Monte Carlo and discrete-event simulation modeling. This paper provides a tutorial of the functionality of the discrete-event modeling capabilities provided by the KSL. The library provides an API framework for developing, executing, and analyzing models using both the event view and the process view modeling perspectives. Because models can be developed that contain both modeling perspectives, the KSL provides great flexibility during the model building process. This tutorial provides both an overview of the library and a complete example including its analysis using KSL constructs. pdf Introductory TutorialsIntroductory Tutorial: Simulation Optimization under Input Uncertainty Chair: Wei Xie (Northeastern University)
Introductory Tutorial: Simulation Optimization under Input Uncertainty Linyun He and Eunhye Song (Georgia Institute of Technology) Abstract AbstractInput uncertainty in the simulation output is caused by the estimation error in the input models of the simulator due to finiteness of the data from which they are estimated. Ignoring input uncertainty when formulating and solving a simulation optimization problem may lead to a solution with poor system performance. This tutorial discusses how to incorporate input uncertainty in simulation optimization to avoid such risk. We first categorize the problems into three groups based on their contexts: fixed batch data, streaming data, and active input data collection problems. Input and simulation output response modeling frameworks that can be adopted in all three categories are discussed. Then, we provide a high-level overview of simulation optimization problem formulations and algorithmic approaches to tackle problems in each group. Some thoughts on future research directions are shared. pdf Introductory TutorialsImportance Sampling for Minimization of Tail Risks: A Tutorial Chair: Chang-Han Rhee (Northwestern University)
Importance Sampling for Minimization of Tail Risks: A Tutorial Anand Deo (Indian Institute of Management Bangalore) and Karthyek Murthy (Singapore University of Technology and Design) Abstract AbstractThis paper provides an introductory overview of how one may employ importance sampling effectively as a tool for solving stochastic optimization formulations incorporating tail risk measures such as Conditional Value-at-Risk. Approximating the tail risk measure by its sample average approximation, while appealing due to its simplicity and universality in use, requires a large number of samples to be able to arrive at risk-minimizing decisions with high confidence. In simulation, importance sampling is among the most prominent methods for substantially reducing the sample requirement while estimating probabilities of rare events. Can importance sampling be used for optimization as well? This tutorial aims to provide an introductory overview of the two key ingredients in this regard, namely, (i) how one may arrive at an importance sampling change of measure prescription at every decision, and (ii) the prominent techniques available for integrating such a prescription within a solution paradigm for stochastic optimization formulations. pdf Introductory TutorialsSimulation Exploration Experience (SEE) Introductory Tutorial Chair: Cristina Ruiz-Martín (Carleton University)
Simulation Exploration Experience (SEE) Introductory Tutorial Maziar Ghorbani, Anastasia Anagnostou, Nura Tijjani Abubakar, and Hridyanshu Aatreya (Brunel University London); Damon Curry (Simulation Exploration Experience Inc.); and Simon J.E. Taylor (Brunel University London) Abstract AbstractThis paper presents an introductory tutorial based on the Simulation Exploration Experience (SEE) 2024, highlighting a collaborative effort by NASA, SISO and international academic partners to model lunar facilities and habitats through the High-Level Architecture (HLA) for distributed simulations. Focused on federating simulations of lunar infrastructure, this paper outlines methodical steps for creating and executing models that incorporate communication systems and 3D visualizations to support educational and research initiatives in space exploration. Reflecting on SEE 2024’s advancements, the tutorial emphasizes significant progress in using simulation technology to promote innovation and collaboration across various scientific disciplines. This contribution, intended for discussion at the Winter Simulation Conference (WSC) 2024, showcases the role of HLA runtime infrastructure (RTI) in enabling realistic and interoperable simulation environments, enriching the discourse on simulation education. pdf Introductory TutorialsIncreasing Model Transparency in System Dynamics Models Chair: Anastasia Anagnostou (Brunel University London)
Increasing Model Transparency in System Dynamics Models Ignacio J. Martinez-Moyano (Argonne National Laboratory, University of Chicago) Abstract AbstractModels are simplified descriptions of real systems. To be useful, models need to be as transparent as possible so that the data used and all the assumptions about the real system implemented in the model are evident, documented, and readily available for inspection. In this paper, transparency in models—particularly System Dynamics models—is discussed and the use of an automated tool—the SDM-Doc tool—for model documentation and assessment is proposed. The proposed tool is showcased using a simple epidemics model. The use of the tool and the benefits derived from using it during the model building process are presented and explained. pdf
Logistics Supply Chains Transportation Logistics Supply Chains TransportationTraffic Simulation Chair: Bhakti Stephan Onggo (University of Southampton, CORMSIS)
Enhancing Passenger Flow at Subway Transfer Stations through Simulation Modeling Best Contributed Applied Paper - Finalist Dongyang Zhen (University of Maryland, College Park); Zhuoxuan Liu (Central University of Finance and Economics); Yonghao Chen (Columbia University); and Qingbin Cui (University of Maryland, College Park) Tags: agent-based, big data, discrete-event, transportation, student Abstract AbstractWith the increasing popularity of public transportation, congestion at multi-line subway transfer hubs has become a pressing issue. This study presents a comprehensive simulation model utilizing AnyLogic software to optimize the performance of the 59th Street-Columbus Circle subway transfer station in New York City by addressing challenges of inefficient passenger flow management. Leveraging the NYC Turnstile Usage Data, the simulation focuses on this major three-level transfer hub to analyze passenger flows, identify bottlenecks, and evaluate potential solutions. Extensive simulation experiments reveal critical high-density regions prone to severe congestion, particularly at intersections. Given the inverse relationship between crowd density and average passenger speed, higher densities lead to near-stagnant conditions during peak hours. By providing data-driven insights through simulation analyses, this research aims to enhance commuter experience, reduce delays, ensure efficient operations at vital transit hubs, and contribute to more sustainable urban transportation by encouraging public transit usage. pdfEnhancing Driver Behavior Models in Response to Emergency Vehicles Gopikrishnan Nair Suresh Kumar, Michael Hunter, and Angshuman Guin (Georgia Institute of Technology) Tags: agent-based, behavior, transportation, C++, student Abstract AbstractEmergency Vehicle Preemption (EVP) is a traffic operation strategy intended to minimize the travel times of Emergency Vehicles (ERVs) in a network. 
The ripple effects of a disruptive event such as the entry of an ERV are usually seen over a broad area of the traffic network, under medium to heavy traffic conditions. As traffic densities continue to grow, incorporating a robust preemption system is vital in ensuring prompt emergency responses. Preemption systems are often evaluated under ideal scenarios in a simulation environment, without consideration of the interactions between ERVs and non-ERVs. Our research intends to develop ERV and non-ERV driver models to enable realistic simulation of such interactions. Our findings show large differences in the performance metrics reported by simulation models on standard simulation platforms with and without the incorporation of realistic driver behavior. pdfAdaptive Transit Signal Priority Based on Deep Reinforcement Learning and Connected Vehicles in Traffic Microsimulation Environment Dickness Kakitahi Kwesiga, Angshuman Guin, and Michael P. Hunter (Georgia Institute of Technology) Tags: machine learning, optimization, transportation, student Abstract AbstractModel-free reinforcement learning (RL) provides a potential alternative to earlier formulations of adaptive transit signal priority (TSP) algorithms based on mathematical programming that require complex and nonlinear objective functions. This study extends RL-based traffic control to include TSP. Using a microscopic simulation environment and connected vehicle data, the study develops and tests a TSP event-based RL agent that assumes control from another developed RL-based general traffic signal controller. The TSP agent assumes control when transit buses enter the dedicated short-range communication (DSRC) zone of the intersection. This agent is shown to reduce the bus travel time by about 21%, with marginal impact on general traffic at a saturation rate of 0.95. The TSP agent also shows slightly better bus travel time compared to actuated signal control with TSP. 
The architecture of the agent and simulation is selected considering the need to improve simulation run time efficiency. pdf Logistics Supply Chains TransportationWarehouses Chair: Sahil Belsare (Walmart, Inc. USA; Northeastern University)
Reducing Transient Behavior in Simulation-Based Digital Twins: A Novel Initialization Approach for Order Picking Systems Stefan Galka (Ostbayerische Technische Hochschule Regensburg) Tags: discrete-event, digital twin, logistics, Siemens Tecnomatix Plant Simulation, government Abstract AbstractIn the context of facilitating operational decision-making through simulation-based digital twins, precise and expeditious synchronization of simulation models with real-system load states is paramount. Such synchronization serves to attenuate the typical transient behavior observed in material flow simulation, confining it to a brief temporal window. This paper delineates a novel conceptual framework for initializing simulation models, illustrated through an exemplar of an order picking system integrated with SAP Extended Warehouse Management as its warehouse and operation management system. Through empirical inquiry, the ramifications of the proposed initialization framework on the transient behavior of the simulation model are scrutinized. Notably, the reference simulation model commences in a state of 'empty' load. The findings of this study evince that the proposed approach yields a significant improvement in transient behavior. pdfCloud based Simulation Platform (CSP): A Novel Way to Democratize Simulation based Experimentation Rohan Vaidya, Abhineet Mittal, and Ganesh Nanaware (Amazon) Tags: discrete-event, digital twin, logistics, FlexSim, industry Abstract AbstractOrganizations today are integrating technologies such as cloud computing, and digital twin in their manufacturing and logistical processes. In a capital-intensive logistics industry, Discrete event simulation (DES) plays a crucial role in distribution center design, automation system performance analysis, optimization and operational planning. 
Developing and deploying discrete event simulation models demands proficiency in various simulation software and programming languages, imposing limitations on widespread use of simulation models for experimentation. This paper presents an AWS based cloud simulation platform (CSP) which is a secure and scalable solution for seamless execution and democratization of discrete event simulation. CSP empowers simulation practitioners to seamlessly execute simulation models and perform scientific experiments. The paper also provides details of the CSP architecture consisting of data import/export modules and a simulation integration module. As an example, we present a simulation-based staffing tool deployed through CSP. pdfMetamodel-Based Order Picking for Automated Storage and Retrieval Systems Andrea Ferrari (Politecnico di Torino) and Canan Gunes Corlu (Boston University) Abstract AbstractAutomated warehouses, a key component of modern supply chain processes, have been widely introduced in various industries. Their automation, including sophisticated systems such as automated storage and retrieval systems, plays a crucial role in improving efficiency and reducing operational costs. This paper focuses on the order picking problem in a multi-level shuttle system, which aims at predicting the time required to fulfill a set of picking orders, i.e., the makespan. A metamodel based on a neural network architecture that exploits long short-term memory and linear layers is proposed. The metamodel was trained and tested on synthetic data from a stochastic discrete event simulation model. Extensive experiments illustrate the validity of the metamodel in accurately predicting the makespan. The study not only advances theoretical modeling in the context of automated warehousing, but also outlines future research directions for improved metamodel performance and broader applications, such as stochastic optimization and deadlock prediction. 
pdf Logistics Supply Chains TransportationPorts and Terminals Chair: Uwe Clausen (Technische Universität Dortmund, Institute of Transport Logistics)
PortalLite Towns: Investigating the Viability and Impact of Distributed Small Ports Network in Enhancing Accessibility and Sustainability Jay Amer (University of Tennessee, Knoxville; N. J. Malin); Xudong Wang and Xueping Li (University of Tennessee, Knoxville); and Yanan Li and Haobin Li (National University of Singapore) Tags: agent-based, optimization, logistics, supply chain, AnyLogic Abstract AbstractThis study delves into the feasibility and efficiency of the PortalLite system, an innovative maritime logistics concept designed to enhance accessibility to remote regions, reduce environmental impacts, and alleviate urban congestion through the use of a small distributed ports network. Through a combination of simulation and mathematical modeling, we evaluate the system's economic viability, operational efficiency, and environmental benefits under various scenarios. Our findings indicate that the PortalLite system can reduce overall logistical costs in urban areas with existing waterways under certain conditions. Furthermore, the analysis reveals that despite initial investment costs, the PortalLite system's long-term economic benefits can be significant, with a return on investment achievable within a relatively short period under optimal conditions. This research highlights the potential of the PortalLite system to provide sustainable, cost-efficient, and accessible transportation solutions, especially in areas currently underserved by traditional logistics models. pdfScenario based V&V of Automated Driving Vessels Arnold Hermann Akkermann (German Aerospace Center (DLR)) Tags: optimization, validation, verification Abstract AbstractThe main objective of this research is to develop a simulation-based test track for the verification and validation (V&V) of automated sailing vessels. In order to benefit from the experience of the automotive industry, traffic accident research and V&V methods in this field have been analyzed. 
In this paper, the effects of traffic separation in maritime traffic are analyzed on the basis of traffic separation in road traffic. Examples of the typical behavior of seagoing vessels are presented based on the analysis and evaluation of historical data collected worldwide in the context of maritime traffic separation. In order to effectively and sustainably use traffic separation systems for V&V, the open test space of the world's oceans is clustered using a methodical approach. This results in a test track for automated driving ships, which is integrated into the simulation and used for V&V through implemented scenarios. pdfAutomatic Model Generation for Discrete Event Simulation of Less-Than-Truckload Terminals Lasse Jurgeleit, Maximilian Mowe, Maximilian Kiefer, Christin Schumacher, and Uwe Clausen (Technische Universität Dortmund) Tags: agent-based, discrete-event, logistics, transportation, AnyLogic Abstract AbstractPlanning and optimizing Less-Than-Truckload (LTL) terminals is challenging, particularly for small and medium-sized enterprises (SMEs) lacking simulation expertise. Despite simulation-based approaches’ effectiveness, the required financial investments often prohibit SMEs from utilizing them. This paper introduces a tool combining automatic model generation and generic modeling for discrete event simulation in LTL terminal planning for all terminal shapes. Specifically designed to meet SMEs’ needs, the tool generates simulation models customized to individual terminal requirements through user input, facilitating efficient layout planning and resource allocation. The approach ensures that SMEs can benefit from advanced planning techniques without substantial financial investments or specialized knowledge, thus fostering competitiveness and sustainability within the LTL sector. 
Validation demonstrates that the automatic model generation tool yields results comparable to manually built simulation models regarding the most efficient terminal shapes. pdf Logistics Supply Chains TransportationSimulation and Optimization Chair: Javier Faulin (Public University of Navarre, Institute of Smart Cities)
Insights Into Car Sharing Relocation Policies Using A Simulation-Optimization Approach Mahmoud El-Banna and Amani Albdour (German Jordanian University) Tags: discrete-event, optimization, logistics, Arena Abstract AbstractOne-way carsharing allows customers to pick up a vehicle from one location and drop it off at another one. While this approach is gaining acceptance over two-way or free-floating carsharing for small populations, it suffers from vehicle imbalances: excess vehicles at some stations and shortages at others. Proper investigations are necessary to minimize these imbalances. This paper compares two relocation policies in a Jordanian pilot case study using discrete event simulation: user-based (adjusting service prices to influence demand) and staff-based (hiring external resources). Results show that the staff-based policy outperformed the user-based policy by 55.4% in vehicle utilization and by 3.4% in cycle service level. However, the user-based policy achieved higher overall gains. pdfDesigning the Charging Stations Network for Freight Delivery by Drones Using Simulation-Optimization Irene Izco, Adrian Serrano-Hernandez, and Javier Faulin (Public University of Navarre, Institute of Smart Cities) Abstract AbstractDrones are a promising alternative for last-mile delivery to conventional vehicles that contribute to road congestion and air pollution. Despite the autonomy, flexibility, and agility of drones, their limited battery capacity and payload compromise their flight range. In this paper, this challenge is addressed through the placement of charging stations where drone batteries are recharged to expand their flying span. This work relies on a simulation-optimization solution approach to determine the optimal number of drone hubs to serve a given delivery order demand. The optimization aims at minimizing charging station installation costs, drone energy consumption, and operational costs. 
The simulation model reproduces the parcel delivery flights as well as the drone battery consumption and charging cycles. Moreover, we pinpoint station locations and sizes, allocate customer demands to stations, and dimension the drone fleet to deliver packages efficiently. pdf Logistics Supply Chains TransportationSimulation to Support Scheduling Chair: Klaus Altendorfer (Upper Austrian University of Applied Science)
Simulation and Optimization-Based Planning of the Use of Tank Containers in the Production of Specialty Chemicals Best Contributed Theoretical Paper - Finalist Maximilian Kiefer, Patrick Buhle, and Uwe Clausen (TU Dortmund University) Tags: agent-based, optimization, logistics, AnyLogic, student Abstract AbstractChanging market conditions in the chemical industry are leading to an increased demand for fast and individually engineered chemicals. This results in a decline in mass production towards producing small, demand-driven quantities. A combination of changing demand and the need for short-term adjustments requires flexible production planning and logistics. To ensure logistics flexibility, primarily manual processes are used. However, this comes with the risk of direct contact between humans and chemicals. One way to avoid this contact and enable sufficient flexibility is to use tank containers directly connected to the production plants. This paper aims to develop a framework that combines simulation and optimization for planning the use of tank containers. Based on this framework, tank container storage materials will be selected using optimization. Furthermore, simulation helps to evaluate the influences of the selection on the logistics system. Here, the focus is on the management of general cargo containers. pdfEvaluating Production Planning and Control Systems in Different Environments: A Comparative Simulation Study Wolfgang Seiringer, Balwin Bokor, and Klaus Altendorfer (University of Applied Sciences Upper Austria) Abstract AbstractSelecting the appropriate production planning and control systems (PPCS) presents a significant challenge for many companies, as their performance, i.e. overall costs, depends on the production system environment. Key environmental characteristics include the system's structure, i.e. flow shop, hybrid shop, or job shop, and the planned shop load. 
Besides selecting a suitable PPCS, its parameterization significantly influences the performance. This publication investigates the performance and the optimal parameterization of Material Requirement Planning (MRP), Reorder Point System (RPS), and Constant Work In Progress (ConWIP) in different stochastic multi-item multi-stage production system environments by conducting a comprehensive full factorial simulation study. The results indicate that MRP and ConWIP generally outperform RPS in all observed environments. Moreover, when comparing MRP with ConWIP, the performance clearly varies depending on the specific production system environment. pdfCombining Simulation and Recurrent Neural Networks for Model-based Condition Monitoring of Machines Alexander Wuttke and Markus Rabe (TU Dortmund University), Joachim Hunker (Fraunhofer Institute for Software and Systems Engineering ISST), and Jan-Philipp Diepenbrock (IVA Schmetz GmbH) Tags: estimation, machine learning, neural networks, cyber-physical systems Abstract AbstractMaintenance is pivotal in industry, with condition-based maintenance emerging as a key strategy. This involves monitoring the machine condition through sensor data analysis. Model-based approaches compare observed data with expected values from models, which requires high-quality models. An established method is to use simulation models, which in many cases produce good results but may lack precision due to uncertainties. Alternatively, models created by machine learning can detect patterns directly from data. This paper proposes combining simulation models with machine learning models, leveraging the simulation's a-priori knowledge and machine learning's data patterns to enhance models for condition monitoring. Recurrent neural networks are suggested as the machine learning method. The paper outlines a systematic approach and demonstrates its application in an industrial use case, which investigates vacuum processes in industrial furnaces. 
pdf Logistics Supply Chains TransportationSupply Chains Chair: Karin Westlund (Uppsala University, Skogforsk)
Validation and Quantification of Possible Model Extensions for a Railway Operations Model Using Delay Data Disaggregation Nadine Schwab (Technische Universität Wien, dwh GmbH); Matthias Rößler and Hannah Kastinger (dwh GmbH); Günter Schneckenreither (dwh GmbH, Technische Universität Wien); Matthias Wastian (dwh GmbH); and Niki Popper (Technische Universität Wien, dwh GmbH) Tags: agent-based, data analytics, logistics, supply chain, transportation Abstract AbstractIn simulation of railway traffic, it is crucial to incorporate realistic delays. Delay data can be obtained from historical records containing scheduled and actual times of trains. Labelled or unlabelled with delay causes, these data do not consistently indicate whether a delay is secondary (e.g., caused by another delayed train) or primary (i.e., caused by external or unknown events) for a specific simulation application due to missing, inconsistent, and biased data. In a realistic simulation, only primary delays should be injected, whereas secondary delays emerge from the simulated dynamics. The authors developed an approach to differentiate between primary and secondary delays that is flexible in regard to simulated resources, performing a network analysis and using different thresholds for blocking trains. Finally, the disaggregated primary delays were inserted into a simulation model, showing that it can reproduce the actual total delays with reasonable errors. pdfDisaster Relief Inventory Simulation: Managing Resources in Humanitarian Camps Cem Yarkin Yildiz and Erhun Kundakcioglu (Özyeğin University) Tags: discrete-event, random process generation, supply chain, Java, student Abstract AbstractNatural disasters and conflicts often result in humanitarian crises, necessitating effective inventory management to meet the needs of displaced populations. This paper introduces a sophisticated simulation model tailored for disaster relief scenarios. 
By integrating real-world dynamics and constraints, such as perishability of goods, budget limitations, and uncertain demand, the model offers a robust framework for decision-makers in humanitarian organizations addressing post-disaster inventory management challenges. The simulation tool is open source, promoting widespread adoption and adaptation, thereby enriching the humanitarian logistics toolbox. Computational experiments are conducted to validate the simulation engine and provide valuable insights. pdfAnalyzing Delivery Performance and Robustness of Wood Supply Chains Using Simulation-Based Multi-Objective Optimization Karin Westlund (Uppsala University, Skogforsk) and Amos H.C. Ng (Uppsala University) Tags: discrete-event, optimization, logistics, supply chain, Python Abstract AbstractThe wood supply chain is complex, involving numerous stakeholders, processes, and logistical challenges to ensure the timely and accurate delivery of wood products to customers. Variation in road accessibility caused by weather further compounds operational complexity.
This paper delves into the challenges faced by forestry managers and explores how simulation and optimization techniques can address these challenges. By integrating simulation with multi-objective optimization algorithms, this research aims to optimize harvest scheduling, addressing multiple conflicting objectives including maximizing service level and throughput, while minimizing lead time and delivery deviation measured as a loss function. The findings underscore the potential of such a simulation-based multi-objective optimization approach to enhance both delivery performance and robustness in wood supply chains, providing valuable insights for decision-making. Ultimately, this research contributes to advancing the understanding of how simulation and optimization techniques can bolster the efficiency and resilience of the forestry industry to face evolving challenges. pdf
Manufacturing and Industry 4.0 Track Coordinator - Manufacturing and Industry 4.0: Alp Akcay (Northeastern University), Christoph Laroque (University of Applied Sciences Zwickau), Guodong Shao (National Institute of Standards and Technology) Manufacturing and Industry 4.0Digital Twins for Manufacturing Chair: Kenneth J. Braakman (NXP Semiconductors N.V., Eindhoven University of Technology)
Simulation Aspects of a Generic Digital Twin Ecosystem for Computer Numerical Control Manufacturing Processes Minas Pantelidakis and Konstantinos Mykoniatis (Auburn University) Tags: data driven, cyber-physical systems, digital twin, manufacturing, student Abstract AbstractThis work considers a generic Digital Twin Ecosystem (DTE) for computer numerical control manufacturing processes. The DTE aims to accurately model physical systems' behavior in 3D virtual space, improve stakeholder decision-making based on real-time analytics, and generate insight for autonomous process intervention. The DTE uses the Unity real-time development platform to integrate online simulation for process replication, remote monitoring and management of the manufacturing process, and offline simulation for "what-if" analysis and exploration of process variables under various operational scenarios. Online and offline simulation could be combined to enhance twinning with predictive capabilities for timely autonomous intervention. In this work, we describe the architecture of the generic DTE, detail its components, and demonstrate its universal applicability through two case studies in Additive and Subtractive Manufacturing. We also discuss simulation aspects of the DTE, focusing on process replication and process lookahead. pdfEnhancing Digital Twins with Deep Reinforcement Learning: A Use Case in Maintenance Prioritization Siyuan Chen (Chalmers University of Technology); Paulo Victor Lopes (Aeronautics Institute of Technology, Federal University of Sao Paulo); Silvan Marti and Mohan Rajashekarappa (Chalmers University of Technology); Sunith Bandaru (University of Skövde); Christina Windmark (Lund University); and Jon Bokrantz and Anders Skoogh (Chalmers University of Technology) Abstract AbstractThis paper introduces an innovative framework that enhances digital twins with deep reinforcement learning (DRL) to support maintenance in manufacturing systems. 
Utilizing a sophisticated artificial intelligence (AI) layer, this framework integrates real-time and historical production data from a physical manufacturing system into a digital twin, enabling dynamic simulation and analysis. Maintenance decisions are informed by DRL algorithms that analyze this data, facilitating smart maintenance strategies that adaptively prioritize tasks based on predictive analytics. The effectiveness of this approach is demonstrated through a case study in a lab-scale drone factory, where maintenance tasks are prioritized using a proximal policy optimization algorithm. This integration not only refines maintenance decisions but also aligns with the broader goals of operational efficiency and sustainability in Industry 4.0. Our results highlight the potential of combining DRL with digital twins to significantly enhance decision-making in industrial maintenance, offering a novel approach to predictive and prescriptive maintenance practices. pdfThe Economic Impact of Digital Twin Technology on Manufacturing Systems Ali Ahmad Malik (Oakland University) Abstract AbstractDigital Twin technology is rapidly advancing, offering a virtual representation that models and connects complex physical systems for diverse purposes like simulation, monitoring, maintenance, and optimization. In contrast to traditional approaches limited to simulating specific physical processes, simulations within a digital twin comprise polymorphic environments that accurately depict large-scale systems. These digital twins remain connected to their physical counterparts through real-time data and feedback loops. While the benefits of implementing digital twins are numerous, the resources, effort, and investment required can vary for each use case. Manufacturers often must assess the return on investment (ROI) before committing to these initiatives. However, digital twins' intricate, multidimensional nature poses challenges in accurately evaluating their ROIs. 
This research assesses the economic impact of developing digital twins for manufacturing systems and presents a practical framework for return on investment. This systematic approach can help stakeholders enhance the financial viability of digital twin projects. pdf Manufacturing and Industry 4.0Intralogistics Chair: Young Jae Jang (Korea Advanced Institute of Science and Technology, Daim Research)
Predictive Decision Models for an Energy Efficient Operation of Stacker Cranes in a High-bay Warehouse Rico Zoellner, Konrad Handrich, Frank Schulze, and Thorsten Schmidt (Technical University Dresden) Abstract AbstractThis paper presents a simulation-based optimization framework to find and evaluate optimal storage plans in a high-bay warehouse. The two objectives under consideration are reducing the total energy consumption and maximizing the recuperation of energy of stacker cranes. These cranes operating in different aisles are connected by an internal circuit. One part of the framework simulates the power flows both on a single stacker crane and in the internal circuit. Based on that, the optimization part computes energy optimal trajectories for prescribed movements using a variational approach. By dividing the trajectories into time intervals, a MIP model delivers double-cycle plans. For longer downward movements, the shape of optimal trajectories would be technically disadvantageous, which requires alternatives to avoid abrasion of the devices. pdfManufacturing Intralogistics Concepts for a Battery Assembly Line Bilgenur Erdogan, Quang-Vinh Dang, Mehrdad Mohammadi, and Ivo Adan (Eindhoven University of Technology) Abstract AbstractThis paper designs intralogistics concepts for an electrical battery pack production setup inspired by our industry partner, featuring automated and manual workstations. In accordance with the principles of Industry 4.0, autonomous mobile robots (AMRs) and automated guided vehicles (AGVs) are employed to carry out material supply and product transportation, respectively. The complexity rises due to the interdependencies among production, material supply, and product transportation. The proposed intralogistics concepts aim to optimize throughput while utilizing AGVs and AMRs efficiently. Using real-world data from the industry partner, the efficacy of these concepts is shown via simulation across varying fleet sizes and product types, with extendable implications for high-mix-low-volume production and autonomous manufacturing systems. pdfComponent-based Synthesis of Structural Variants of Simulation Models for Changeable Material Flow Systems Jan Winkels (TU Dortmund University); Felix Özkul and Robin Sutherland (University of Kassel); Jannik Löhn (TU Dortmund University); Sigrid Wenzel (University of Kassel); and Jakob Rehof (TU Dortmund University, Lamarr Institute for Machine Learning and Artificial Intelligence) Abstract AbstractDespite relevant research endeavors, modeling efforts related to the building of discrete-event simulation models for planning changeable material flow systems still limit their practical application. This is because simulation experts have to model many possible structural variants and compare them based on key performance indicators such as throughput, workload or investment costs, while also ensuring sufficient system changeability. 
This article presents a methodology for reducing efforts for structural variation during the experimental phase of a simulation study. Starting from a valid initial simulation model, structural variants of this simulation model are automatically generated by applying component-based software synthesis which uses combinatorial logic; thereby, a range of simulation models is provided for the user. This paper presents the outlined methodology using a case study and places it in the research context of reducing efforts associated with the design and execution of simulation experiments. pdf Manufacturing and Industry 4.0Energy-Aware Models Chair: Madlene Leißau (University of Applied Sciences Zwickau)
Data-Driven Extraction of Simulation Models for Energy-Oriented Digital Twins of Manufacturing Systems: An Illustrative Case Study Atieh Khodadadi (Karlsruhe Institute of Technology) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology; The Maersk Mc-Kinney Moller Institute, University of Southern Denmark) Tags: data driven, process mining, digital twin, manufacturing, Python Abstract AbstractManufacturing systems, as significant energy consumers and potential contributors to energy efficiency optimization, play an important role in addressing global energy challenges. Digital Twins (DT) utilize available data from smart manufacturing systems (SMS) to effectively understand and replicate the system's energy-related behaviors. DTs facilitate detailed systems analysis and enable decision support for optimizing energy efficiency through various relevant what-if scenarios. In this paper, we propose a methodology for data-driven extraction of simulation models for Energy-Oriented DTs (EODTs) of SMSs. Through a case study of a data-driven EODT for an assembly process of a quadcopter drone part, we illustrate our initial methodology and the data requirements. Our case study helps comprehend the complexity of extracting EODTs in SMSs, offering insights into the integration of production and energy-related processes and behaviors of the system. pdfEnergy Price and Workload Related Dispatching Rule: Balancing Energy and Production Logistics Costs Balwin Bokor, Wolfgang Seiringer, and Klaus Altendorfer (University of Applied Sciences Upper Austria) and Thomas Felberbauer (St. Pölten University of Applied Sciences) Abstract AbstractIn response to the escalating need for sustainable manufacturing practices amid fluctuating energy prices, this study introduces a novel dispatching rule that integrates energy price and workload considerations with Material Requirement Planning (MRP) to optimize production logistics and energy costs. 
The dispatching rule effectively adjusts machine operational states, i.e., turning the machine on or off, based on current energy prices and workload. By developing a stochastic multi-item multi-stage job shop simulation model, this research evaluates the performance of the dispatching rule through a comprehensive full-factorial simulation. Findings indicate a significant enhancement in shop floor decision-making through reduced costs. Moreover, the analysis of the Pareto front reveals trade-offs between minimizing energy and production logistics costs, aiding decision-makers in selecting optimal configurations. pdf Manufacturing and Industry 4.0Manufacturing Systems Modeling Chair: Deogratias Kibira (National Institute of Standards and Technology, University of Maryland)
Discovering Simulation Models from Labor-intensive Manufacturing Systems Manuel Götz and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology) Tags: data driven, Petri Nets, process mining, digital twin, manufacturing Abstract AbstractSimulation modeling has become essential in industries for enhancing processes, improving efficiency, and mitigating risks within manufacturing systems. However, the automatic discovery of these models remains challenging, particularly in labor-intensive manufacturing systems (LIMSs), which are widespread in industries like food or apparel manufacturing. LIMSs, characterized by dominant direct involvement of human operators throughout the value chain, present unique complexities. Here, we investigate state-of-the-art modeling approaches for capturing behaviors of human operators in LIMSs and examine their implications for extracting corresponding simulation models. Specifically, we employ these insights to automatically extract a simulation model of LIMSs as a stochastic Petri net (SPN): this SPN explicitly models operators' fatigue and its impact on task durations. Our research contributes to laying the groundwork for developing Digital Twins for LIMSs. By automating model creation and ensuring continuous updates, our approach facilitates the automatic adaptation of simulation models to reflect changes in the system. pdfDiscrete Events Simulation of a Manufacturing Line for Floating Wind Turbines Diego Crespo-Pereira (University of A Coruna) and Santiago Bueno-Infantes, Daniel Molero-Medina, and Sara Pereira-de la Infanta (Seaplace S.L.) Tags: discrete-event, logistics, FlexSim Abstract AbstractThis paper presents the simulation model developed for designing and optimizing a mass manufacturing plant of floating offshore wind units. The floating wind technology used for the analysis is CROWN FW®, a competitive solution made of concrete and steel deck. 
To respond to the industry's huge demand, mass production is necessary, and the optimization of the construction sites and the logistic procedures is highly valuable. The manufacturing line involves three workstations that produce floaters, two turbine assembly workstations, and on-land workstations for preassemblies. The simulation model has been implemented in FlexSim. The model has been parameterized to allow testing different procurement scenarios, task durations, and planning of a 50-unit program. It includes the effect of meteorological conditions like wind speed or tide height, which can cause interruptions of some workstations. The results obtained in a first optimization run are discussed. pdfSynthetic Simulated Environment for Discrete Manufacturing Systems: A Demonstrator through a Computational Modeling Approach Silvan Marti (Chalmers University of Technology); Paulo Victor Lopez (Chalmers University of Technology, Aeronautics Institute of Technology); Siyuan Chen, Mohan Rajashekarappa, and Elham Rekabi Bana (Chalmers University of Technology); Amon Göppert (RWTH Aachen University); and Mélanie Déspeisse, Johan Stahre, and Björn Johansson (Chalmers University of Technology) Abstract AbstractIn light of the challenges posed by the frequent unavailability of coherent data in manufacturing for operational Artificial Intelligence (AI) decision support systems, the generation and utilization of synthetic datasets have become essential. This study introduces a simple numerical Synthetic Simulated Environment (SSE) using timed and parametrizable Petri Net (PN) modules, embedded in a Directed Acyclic Graph (DAG) structure described by an adjacency matrix to represent material flow. Implemented in PyTorch for seamless integration with AI components, our simulation framework simplifies manufacturing systems, yet remains expandable for diverse use cases. The simulation model was demonstrated, displaying its capability to generate synthetic data. 
This approach explores the practicality and applicability of generated data. It could serve as an ideal environment to benchmark Artificial Intelligence (AI) algorithms in comparative experiments, investigating operational problems featured in the dynamic interactions of discrete manufacturing systems. pdf Manufacturing and Industry 4.0Production Scheduling Chair: Christin Schumacher (TU Dortmund University)
Assessing Scheduling Strategies for a Shared Resource for Multiple Synchronous Lines Harshita Parasrampuria and Russell Barton (Pennsylvania State University) Tags: discrete-event, manufacturing, Simio, student Abstract AbstractThis study uses discrete-event simulation to explore scheduling policies for a shared resource across three synchronous manufacturing lines. The objective is to enhance operational efficiency and reduce blocking and starving downtime. Scheduling for synchronous environments is a less explored area compared to asynchronous systems. Simulation experiments compare the performance of five easy-to-implement scheduling strategies: First-In-First-Out (FIFO), Upstream Priority, Downstream Priority, Random Selection, and Round Robin. The Round-Robin method is commonly used in CPU and computer network scheduling. The research assesses the impact of these policies on manufacturing efficiency. Scenarios include random station breakdowns. Statistical analysis identifies FIFO and Round Robin strategies as notably effective, providing significant insights for optimizing scheduling in manufacturing operations. This result can help set a scheduling strategy for future development of a digital twin model, potentially incorporating machine learning. pdfCyber-Physical Production System Framework for Production Scheduling in Smart Factories Ivan Arturo Renteria-Marquez and Jose Carlos Garcia Marquez Basaldua (University of Texas at El Paso), Oswaldo Aguirre (Texas A&M University), and Bryan Eduardo Lara-Medrano and Tzu-Liang (Bill) Tseng (University of Texas at El Paso) Tags: cyber-physical systems, manufacturing, Matlab, Simio, student Abstract AbstractSmart manufacturing refers to manufacturing systems composed of automatic processes interconnected through the Internet of Things. 
Likewise, a smart factory is a core concept of smart manufacturing that refers to an automatic production system that is fully connected, where data is collected and analyzed to make informed decisions. Moreover, smart factories are based on the coexistence of a physical and virtual factory. Hence, the development of a cyber-physical production system framework to merge the physical and virtual factory is motivated. This manuscript describes the integration of a supervisory control and data acquisition system with a factory digital twin, which allows production floor remote data gathering, scheduling optimization, and production floor machines remote control. A scenario of a painting shop is presented to illustrate the performance of the cyber-physical production systems framework. The results exemplify the potential benefits to the decision-making process when implementing the proposed framework. pdf
MASM: Semiconductor Manufacturing Track Coordinator - MASM: Semiconductor Manufacturing: John Fowler (Arizona State University), Hyun-Jung Kim (KAIST), Lars Moench (University of Hagen) MASM: Semiconductor ManufacturingSemiconductor Manufacturing Operations I Chair: Hyun-Jung Kim (KAIST)
Enhancement of Vendor-Managed Inventory Planning Through Deep Reinforcement Learning Marco Ratusny and Jee Hyung Kim (Infineon Technologies AG), Hajime Sekiya (Fernuniversität Hagen), Maximilian Schiffer (Technische Universität München), and Hans Ehm (Infineon Technologies AG) Tags: data analytics, machine learning, semiconductor, supply chain, Python Abstract AbstractWe explore the application of Twin Delayed Deep Deterministic Policy Gradient (TD3), a Deep Reinforcement Learning (DRL) algorithm, for optimizing Vendor-Managed Inventory (VMI) systems in the semiconductor industry. We introduce a novel multi-scenario DRL algorithm with a continuous action space, designed to effectively manage diverse product/customer combinations, thereby improving VMI performance. We evaluate our algorithm’s efficacy on three distinct products as well as 100 product/customer combinations for the multi-scenario approach. A sensitivity analysis examines the effects of varying shipment penalties on the percentage of no violations (PNV) and shipments. Our findings indicate that our DRL-based VMI model significantly surpasses existing policies used in the semiconductor industry by five
percentage points. pdfSupporting Fab Operations using Multi-Agent Reinforcement Learning Ishaan Sood, Abhinav Kaushik, Tom Bulgerin, Abdelhak Khemiri, and Jasper van Heugten (minds.ai); Johnny Chang and Sam Hsu (Micron); and Jeroen Bédorf (minds.ai) Abstract AbstractOver recent years semiconductor operations have grown in scope, dynamics, and complexity. Consequently, advanced scheduling algorithms and manufacturing engineers can no longer quickly estimate the best performing schedule. In this paper we present how machine learning, in real time, can be used to augment and support manufacturing engineers. The presented results, obtained from production deployments in Micron’s fabs, show that AI assisted scheduling can improve Key Performance Indicators with little to no downside. pdfMaintenance Planning with Deterioration by a Reinforcement Learning Approach - a Semiconductor Simulation Study Cas Leenen (Soitec), Michael Geurtsen (ITEC), and Ivo Adan and Zumbul Atan (Eindhoven University of Technology) Tags: discrete-event, optimization, digital twin, manufacturing, semiconductor Abstract AbstractManufacturing companies are often faced with deteriorating production systems, which can greatly impact their overall performance. Scheduling the preventive maintenance activities optimally is vital for these companies. This study utilizes historical data on product quality deterioration in the last machine of a real-world serial, buffered production line to refine maintenance decisions. Currently, maintenance intervals for the last machine are fixed. The objective is to improve this policy by increasing the production system's long-term throughput. A simplified two-machine-one-buffer (2M1B) system is modeled to devise and assess policies. Various optimization techniques are applied, with the average reward Reinforcement Learning (RL) technique showing the most promising results in numerical experiments. 
The RL-optimized policy exhibits significant potential by considering product quality deterioration, buffer levels ahead of the last machine, and the machine states of the last two machines. pdf MASM: Semiconductor ManufacturingSemiconductor Dispatching and Scheduling Chair: Christoph Laroque (University of Applied Sciences Zwickau)
Reinforcement Learning for Unrelated Parallel Machine Scheduling with Release Dates, Setup Times, and Machine Eligibility Sang-Hyun Cho and Hyun-Jung Kim (KAIST) and Lars Mönch (University of Hagen) Tags: machine learning, manufacturing, semiconductor, student Abstract AbstractThis paper presents a novel approach for solving unrelated parallel machine scheduling problems through reinforcement learning. Notably, we consider three main constraints: release dates, machine eligibility, and sequence- and machine-dependent setup times, to minimize total weighted tardiness. Our work presents a new graph representation for solving the problem and utilizes graph neural networks combined with reinforcement learning. Experimental results show that our proposed method outperforms traditional dispatching rules and an apparent tardiness cost-based algorithm. Furthermore, since we graphically represent and solve the problem, our method can be used regardless of the number of jobs or machines once trained. pdfEnhancing Machine Learning for Situation Aware Dispatching through Generative Adversarial Network Based Synthetic Data Generation Chew Wye Chan and Boon Ping Gan (D-SIMLAB Technologies Pte Ltd) and Wentong Cai (Nanyang Technological University) Tags: machine learning, semiconductor, D-SIMCON, Python, student Abstract AbstractAdapting dispatch rules via machine learning in a complex manufacturing environment has been shown to improve overall factory performance in various studies. However, the performance of the machine learning model depends on the training data. Limited data could reduce the prediction accuracy of the machine learning model, thereby negatively influencing the overall factory performance. Addressing this, we generate synthetic data for the lot attributes, simulate it through a discrete event simulator, and use the resulting data to improve the prediction accuracy of the machine learning model. 
We evaluate three synthetic data generation approaches: Latin Hypercube, Synthetic Minority Oversampling Technique, and Generative Adversarial Networks (GAN), demonstrating the suitability of GANs for synthetic data generation. To validate our approach, we apply two evaluation processes, Train on Real, Test on Real; and Train on Synthetic, Test on Real, showing the improved predictive accuracy of the machine learning model when trained with synthetic data. pdfScheduling Jobs on a Single Stress Test Machine in a Reliability Laboratory Jessica Hautz (KAI), Andreas Klemmt (Infineon Technologies AG), and Lars Moench (University of Hagen) Abstract AbstractWe consider a single-machine scheduling problem with unequal job sizes and ready times. Several jobs can be processed at the same time on the machine if the sum of their sizes does not exceed the capacity of the machine. Only jobs of the same family can be processed at the same time. The machine can be interrupted to start a new job or to unload a completed job. A conditioning time is required to return the machine to the stress-test temperature. The machine is unavailable during conditioning. Jobs that cannot be completed before conditioning resume processing once the machine is available again. The makespan is to be minimized. A mixed-integer linear program and a constraint programming formulation are established, and a constructive heuristic and a biased random-key genetic algorithm are designed. Computational experiments based on randomly generated problem instances demonstrate that the algorithms perform well. pdf MASM: Semiconductor ManufacturingSemiconductor Planning Chair: Lars Moench (University of Hagen)
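The stress-test machine abstract above batches same-family jobs without exceeding machine capacity. As a rough illustrative sketch only (not the paper's MILP, constraint program, or biased random-key genetic algorithm; the job data below is invented), a greedy earliest-ready, same-family batching heuristic could look like:

```python
# Illustrative greedy batching: group ready jobs of the same family into
# capacity-feasible batches, taking jobs in earliest-ready-time order.
# This is a hypothetical sketch, not the algorithm from the paper.

def greedy_batches(jobs, capacity):
    """jobs: list of (family, size, ready_time) tuples; returns list of batches."""
    batches = []
    remaining = sorted(jobs, key=lambda j: j[2])  # earliest ready time first
    while remaining:
        family = remaining[0][0]                  # batch family fixed by first job
        batch, load = [], 0
        for job in list(remaining):
            if job[0] == family and load + job[1] <= capacity:
                batch.append(job)
                load += job[1]
                remaining.remove(job)
        batches.append(batch)
    return batches

# Hypothetical instance: (family, size, ready_time)
jobs = [("A", 3, 0), ("A", 2, 1), ("B", 4, 2), ("A", 4, 3), ("B", 1, 4)]
batches = greedy_batches(jobs, capacity=5)
```

A heuristic like this would only serve as a constructive starting point; the paper's exact formulations additionally handle conditioning interruptions and makespan minimization.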
Deployment of a Novel Production Planning and Prescriptive Analytics Solution: a Seagate Technology Use Case Sebastian Steele, Semya Elaoud, Ioannis Konstantelos, and Dionysios Xenos (Flexciton Ltd) and Tina O’Donnell and Sharon Feely (Seagate Technology) Abstract AbstractAn emerging paradigm in semiconductor manufacturing is the “autonomous fab.” Companies such as Seagate aim to make decision-making autonomous and aligned at all levels, from supply chain planning to tool scheduling, to cope with ever-increasing complexity and labor supply constraints. One identified gap is between committed deliveries in Enterprise Resource Planning (ERP) systems and day-to-day execution, where daily production targets and wafer starts are often poorly aligned with shipment targets. Our novel production planning system creates optimized production targets to manage on-time delivery (OTD), line linearity, dynamic bottlenecks, and starvation. It also provides proactive, quantified actions for improving fab KPIs. The system thus surpasses existing descriptive and predictive analytics solutions (answering “what happened?” and “what will happen?”), providing a prescriptive solution which “makes it happen.” pdfCapacity Planning Accuracy and the Effect of Dynamic Dedication Changes for a Single Wafer Lot Semiconductor Factory Richard Surman, Matt Nehl, and Cole Evanson (Seagate Technology) and Soo Leen Low, Kern Chern Chan, Hui Sian Liau, and Boon Ping Gan (D-SIMLAB Technologies) Abstract AbstractIn the context of a single wafer lot semiconductor factory characterized by high levels of Research and Development (R&D) work-in-progress (WIP), low levels of product-based lots, and lengthy cycle times, this paper investigates the accuracy of capacity planning given the impact of dynamic changes in dedication. We delve into several critical aspects related to dedication planning, drawing insights from historical data and the dispatch logic used. 
The experimental results show the improvement in model accuracy with the incorporation of dedication changes as distribution functions. pdfLong-term Rapid Scenario Planning in the Semiconductor Industry using Deep Reinforcement Learning Bibi Helena Emma Agnes de Jong (NXP, TUe); Willem Van Jaarsveld and Riccardo Lo Bianco (TUe); and Kai Schelthoff (NXP) Tags: agent-based, machine learning, neural networks, C++, student Abstract AbstractThe creation and maintenance of effective production plans is a central problem in supply chain planning. NXP Semiconductors N.V. uses a Mathematical Programming (MP) model to generate offline production plans on both short- and long-term horizons. However, production plans need online updates in response to unforeseen events. This process is carried out manually, making the updated plans suboptimal. This paper proposes the use of Deep Reinforcement Learning (DRL) as a method to produce close-to-optimal updated plans. DRL is suitable for large-scale problems and can produce close-to-optimal scheduling solutions quickly, whereas MP models ensure optimality at the expense of high computational complexity. The performance of DRL is compared against traditional optimization methods and an MP model using a simulation model that mimics NXP’s situation. The results of this study highlight the usefulness of DRL as a tool for short- and long-term decision-making in supply chain planning within the semiconductor industry. pdf MASM: Semiconductor ManufacturingSemiconductor Supply Chains Chair: Michael Hassoun (Ariel University)
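The NXP abstract above contrasts exact Mathematical Programming (optimal but computationally heavy) with fast, close-to-optimal learned policies. The trade-off can be illustrated on a toy production-planning problem with invented numbers (this is not NXP's model): brute-force enumeration finds the optimal plan, while a myopic policy is instant but can incur extra cost.

```python
# Toy illustration of the optimality/speed trade-off: "exact" enumeration
# vs. a fast myopic policy. Demand, capacity, and costs are hypothetical.
import itertools

demand = [2, 4, 3, 5]            # units demanded per period
capacity = 4                     # max production per period
hold, backlog = 1.0, 3.0         # cost per unit held / backordered per period

def plan_cost(plan):
    inv, cost = 0, 0.0
    for produce, d in zip(plan, demand):
        inv += produce - d
        cost += hold * max(inv, 0) + backlog * max(-inv, 0)
    return cost

# "Exact" plan: enumerate all feasible plans (tractable only for toy sizes).
best = min(itertools.product(range(capacity + 1), repeat=len(demand)), key=plan_cost)

# Fast myopic policy: produce the current shortfall, capped by capacity.
inv, myopic = 0, []
for d in demand:
    produce = min(capacity, max(d - inv, 0))
    myopic.append(produce)
    inv += produce - d

exact_cost, myopic_cost = plan_cost(best), plan_cost(tuple(myopic))
```

Here the exact plan pre-builds one unit ahead of the capacity-tight final period (cost 1.0), while the myopic policy backlogs a unit (cost 3.0); a trained DRL policy aims to close that gap without the enumeration cost.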
Borderless Fab Scenarios in Hierarchical Planning Settings: A Simulation Study Raphael Herding (FTK) and Lars Moench (FTK, University of Hagen) Abstract AbstractLots are transferred in borderless fab (BF) scenarios from one wafer fab to another nearby fab to carry out process steps of the transferred lots. BF aims to compensate for scarce bottleneck capacity in some of the wafer fabs. One goal of the master planning function is to distribute the demand over the wafer fabs such that situations are avoided where large queues of lots arise in the wafer fabs. Due to inaccurate modeling of capacities and lead times in master planning, this goal is not always reached. Wafer fabs are often heterogeneous. This leads to additional costs for BF scenarios which might make them less attractive. We are interested in exploring conditions with respect to master planning and wafer fab heterogeneity under which BF scenarios are still beneficial. Master planning, production planning, and the BF lot transfers are carried out in a rolling horizon setting using a cloud-based infrastructure. pdfTransportation Product Carbon Footprint: A Framework for Semiconductor Supply Chain Youlim Son (Infineon Technologies AG, Technical University of Munich) and Woo-Jin Ko, Philipp Ulrich, Rabia Sarilmis, and Hans Ehm (Infineon Technologies AG) Tags: conceptual modeling, semiconductor, transportation, Python, industry Abstract AbstractThis paper presents a comprehensive framework for modeling the Product Carbon Footprint (PCF) associated with the transportation activities within the semiconductor industry, specifically focusing on the gate-to-gate supply chain. The study reveals that the mode of transportation, distance traveled, and weight of shipments, particularly the weight added by packing, significantly influence the transportation PCF. 
Despite the inherent complexities and dependencies within the semiconductor supply chain, this research provides a robust foundation that increases transparency and facilitates comparable studies across the industry. The adaptability of the methodology shows potential for broader applications, contributing to sustainability efforts across related sectors such as the electronics industry. The study emphasizes the step-by-step methodology over quantification of carbon emissions, thereby contributing to industry-wide efforts to understand and mitigate environmental burdens. pdfOrder Lead Time Influencing Factors in the Semiconductor Supply Chain Fabian Gassner, Patrick Moder, and Marco Ratusny (Infineon Technologies AG) Tags: big data, data analytics, machine learning, semiconductor, supply chain Abstract AbstractDetermining and communicating reliable order lead time information is vital to retain customers and increase supply chain resilience, especially in complex settings such as the semiconductor supply chain. To overcome shortcomings of recent research on order lead time and its influencing factors, we collect order data from a global semiconductor manufacturer that captures both make-to-plan and make-to-order information within the internal supply chain. Our contribution is twofold. First, our results support an accurate prediction of the confirmed order lead time and thus its reliable communication to customers. We develop three linear regression models: one general, one conventional, and one model for particular situations where standard delivery times are hard to determine. Second, we develop three specific managerial implications by analyzing influencing factors that highly correlate with the confirmed order lead time and capture information on the customer request, order fulfillment details, and product specifics. pdf MASM: Semiconductor ManufacturingSemiconductor Manufacturing Operations II Chair: Stéphane DAUZERE-PERES (Ecole des Mines de Saint-Etienne)
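The order lead time abstract above develops linear regression models of confirmed order lead time on influencing factors. A bare-bones one-feature sketch with invented numbers (the paper's models use many more factors and real order data) is:

```python
# Minimal ordinary-least-squares fit: lead_time ≈ a + b * order_quantity.
# Data values are hypothetical, for illustration only.

quantities = [10, 20, 30, 40, 50]    # order quantity (hypothetical feature)
lead_times = [12, 14, 17, 19, 21]    # confirmed lead time in days (hypothetical)

n = len(quantities)
mean_x = sum(quantities) / n
mean_y = sum(lead_times) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(quantities, lead_times)) \
    / sum((x - mean_x) ** 2 for x in quantities)
a = mean_y - b * mean_x

predicted = a + b * 35               # predicted lead time for a 35-unit order
```

With these toy numbers the slope is 0.23 days per unit and the intercept 9.7 days; in practice one would also inspect residuals and correlations to derive the kind of managerial implications the paper reports.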
Yield Improvement Using Deep Reinforcement Learning for Dispatch Rule Tuning David Norman, Prafulla Dawadi, and Harel Yedidsion (Applied Materials) Tags: neural networks, semiconductor, AutoSched, industry Abstract AbstractIn this paper we discuss improving yield in semiconductor manufacturing by using reinforcement learning (RL) to tune a dispatching rule parameter to increase the number of lots that process on high-yield equipment. We consider a dispatching rule with a parameter that controls whether the rule allows a lot to process on lower-yield equipment or waits so that the lot can possibly process later on high-yield equipment. In a factory, such a parameter would be set periodically, e.g., once a week, but RL allows the parameter to be updated frequently, leading to better factory performance. We also consider a novel measure of on-time delivery where the goal is to have 95% on-time delivery in a set of time intervals, e.g., shifts. We show how a trained RL agent using a graph neural network outperforms the baseline by maintaining on-time delivery while processing significantly more lots on high-yield equipment. pdfApplication of Drum Rate Control and Multiobjective Optimization in Scheduling Semiconductor Manufacturing Facilities Igor Kuvychko (INFICON), Bennett Poganski (Polar Semiconductor LLC), and Josh Mangahas and Matthew Purdy (INFICON) Abstract AbstractJob shop scheduling is a famously difficult NP-hard problem. Scheduling of semiconductor manufacturing factories adds many additional challenges such as batching, queue timers, setup changes, and others. The size of modern semiconductor factories necessitates problem decomposition to control the combinatorial complexity. It is typical to split a factory into multiple scheduling areas with similar tools (e.g., diffusion furnaces or implanters) to make it computationally tractable. Any decomposition introduces a problem of optimal control of local schedulers to optimize global factory KPIs. 
A practical production schedule must balance multiple and often conflicting performance criteria. This makes factory scheduling a multiobjective optimization problem, with the additional complexity that entails. This paper presents an approach that uses local wafer-out goals (drum goals) to control local scheduling so as to satisfy global fab criteria. This is combined with multiobjective optimization to find and evaluate Pareto-optimal solutions and strike a balance between conflicting objectives. pdfComparison Study to Evaluate the Relationship Between Equipment Uptime Variability Metrics and Cycle Time Chris Keith, Maryam Anvar, and Marino Arturo (Applied Materials) Tags: discrete-event, semiconductor, wafer fab, industry Abstract AbstractThis paper evaluates several measures of equipment uptime variability to determine which most closely track with the cycle time in a semiconductor wafer fab. The analysis is focused on a fleet of similar process tools performing the same function in parallel and the resulting cycle time of lots running through those tools. Discrete-event simulation is used to study the relationship between cycle time and several uptime variability metrics, including coefficient of variation (CV), A80, and A20-A80, for different combinations of fleet size (numbers of tools) and levels of uptime variability based on a range of tool mean time to fail (MTTF) and mean time to repair (MTTR). For the scenarios analyzed, the CV of shift uptime is found to correlate with cycle time as well as or better than the other studied variability metrics. pdf MASM: Semiconductor ManufacturingSemiconductor Manufacturing Operations III Chair: Hans Ehm (Infineon Technologies AG)
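The uptime variability study above finds the coefficient of variation (CV) of shift uptime to be the metric that best tracks cycle time. Computing CV from per-shift uptime samples is straightforward; the sample values below are invented for illustration:

```python
# Coefficient of variation of per-shift uptime fractions:
# CV = standard deviation / mean. Sample data is hypothetical.
import statistics

shift_uptime = [0.92, 0.85, 0.97, 0.78, 0.90, 0.88]  # fraction of each shift the tool was up

cv = statistics.pstdev(shift_uptime) / statistics.mean(shift_uptime)
```

A fleet with identical mean uptime but larger CV would, per the paper's simulations, tend toward longer lot cycle times; metrics such as A80 are quantile-based alternatives studied in the same comparison.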
Leveraging Machine Signals For Device-Level Quality Detection And Automatic Root Cause Analysis In Semiconductor Wire Bonding Kenneth J. Braakman (NXP Semiconductors N.V., Eindhoven University of Technology); D. Martin Knotter (NXP Semiconductors N.V.); and Alp Akcay and Ivo Adan (Eindhoven University of Technology) Tags: data driven, machine learning, semiconductor, student, industry Abstract AbstractThis paper focuses on leveraging machine signal data from wire bond machines by building a data-driven solution to enhance root cause analysis efficiency and real-time quality control in semiconductor wire bonding. Traditional root cause analysis is time-consuming, labor intensive and performed in hindsight. We performed experiments at NXP Semiconductors N.V. that mimicked wire bonding problems caused by forming gas, and subsequently used the resulting real-world data to overcome the traditional root cause analysis challenges. We show that random forest classification models can successfully differentiate between standard and problematic wire bond manufacturing conditions, identifying significant machine-signal-related features associated with forming gas issues. The study demonstrates the effectiveness of linking machine-signal-related features to root causes, enabling proactive detection of potential failures during wire bonding. pdfAnalyzing the Trade-off between Quality and Sojourn Time When Optimizing Sampling Plans in Semiconductor Manufacturing Stéphane Dauzère-Pérès (Mines Saint-Étienne, Univ Clermont Auvergne) and Michael Hassoun (Ariel University) Tags: sampling, manufacturing Abstract AbstractInspired by semiconductor manufacturing, this paper studies a system where the products processed on multiple production machines are sampled to be measured on a single metrology tool. 
In previous research, we showed that minimizing the expected number of defective products is not ensured by using the metrology tool at its maximum capacity, as it induces congestion that impacts the expected product loss. However, the congestion of the metrology tool also impacts the expected sojourn time of products in metrology. Hence, in this paper, we analyze the trade-off between the expected product loss and the expected sojourn time when deciding how much of the metrology capacity should be used. Numerical results show that, depending on some parameters, the expected sojourn time can be significantly reduced without greatly increasing the expected product loss. pdfSpline Interpolation-Based Multi-Scale Model for Etching in a Chlorine-Argon Inductively Coupled Plasma Lado Filipovic and Tobias Reiter (Institute for Microelectronics, TU Wien; CD Lab for Multi-Scale Process Modeling of Semiconductor Devices and Sensors) Tags: digital twin, semiconductor, C++, Python, student Abstract AbstractA multi-variable spline interpolation is used to quickly obtain the wafer incident fluxes from inside an inductively coupled plasma chamber with a chlorine and argon chemistry. The data is obtained by performing a total of 18,750 chamber simulations while varying the inputs, which include the inductive coil power, the gas flow rate, the chamber pressure, the chlorine gas concentration ratio, the temperature, and the applied bias voltage. The data is used to generate a six-dimensional hypercube for each flux of interest, where each dimension corresponds to one of the varied parameters. The fluxes are then incorporated into a semi-empirical feature scale plasma etching model in the process simulation tool ViennaPS. The chemical sticking coefficient is derived from measurements and the model includes the etch rate resulting from the chemical etch component, ion sputtering, and ion-enhanced etching. 
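The spline-interpolation etching paper above precomputes fluxes on a six-dimensional parameter hypercube so that queries avoid rerunning expensive chamber simulations. A two-dimensional stand-in with a made-up "flux model" and bilinear (rather than spline) interpolation conveys the idea:

```python
# Toy 2-D stand-in for a 6-D flux hypercube: precompute values on a grid,
# then answer arbitrary queries by bilinear interpolation instead of
# rerunning the expensive simulation. All functions/values are hypothetical.

def flux_model(power, pressure):          # stand-in for a chamber simulation
    return 2.0 * power + 0.5 * pressure

powers = [0.0, 1.0, 2.0]                  # unit-spaced grid axis
pressures = [0.0, 10.0, 20.0]             # grid axis with spacing 10
grid = [[flux_model(p, q) for q in pressures] for p in powers]

def bilinear(p, q):
    i = min(int(p), len(powers) - 2)      # lower cell index on power axis
    j = min(int(q / 10.0), len(pressures) - 2)
    tp, tq = p - powers[i], (q - pressures[j]) / 10.0
    f00, f01 = grid[i][j], grid[i][j + 1]
    f10, f11 = grid[i + 1][j], grid[i + 1][j + 1]
    return (f00 * (1 - tp) * (1 - tq) + f10 * tp * (1 - tq)
            + f01 * (1 - tp) * tq + f11 * tp * tq)

estimate = bilinear(1.5, 5.0)             # query between grid points
```

Because the toy flux model is linear, interpolation reproduces it exactly here; for real chamber physics, spline interpolation over a dense hypercube keeps the error small while making lookups near-instant.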
pdf MASM: Semiconductor ManufacturingMASM Keynote Chair: John Fowler (Arizona State University)
Review of Past Research and Reflections on Digital Thread/Twin Affordability in (Semiconductor) Supply Chain and Manufacturing Cathal Heavey (University of Limerick) Abstract AbstractThis presentation will review past research projects on improving operations of manufacturing and supply chain systems with the goal of highlighting future research challenges. Several of these projects were in semiconductor manufacturing and supply chains. These past research projects were funded nationally, by the EU, and by industry. It will document several research challenges on the topics of Model Based Systems Engineering (MBSE), optimization using online machine learning metamodels, simulation analysis of Advanced Planning Systems (APS), and supplier selection. Finally, the presentation will reflect on the role of Digital Thread/Twin and the feasibility of this approach with the current availability of methodologies, computing, and human resources. pdf MASM: Semiconductor ManufacturingSemiconductor Digital Twins Chair: Hyun-Jung Kim (KAIST)
Breaking Barriers in Semiconductor Simulations: An Automated Low-Code Framework for Model-Structure Synchronisation and Large-Scale Simulation Studies Madlene Leißau and Christoph Laroque (University of Applied Sciences Zwickau) Abstract AbstractThe paradigm shift towards Industry 4.0 and the emerging trends of Industry 5.0 present ongoing challenges in production planning and control. In response to these dynamics, discrete event-driven simulation methods are gaining prominence as an operational decision-support tool, particularly in the semiconductor industry. This paper introduces an automated low-code framework designed to synchronize model structures across simulation tool boundaries for extensive simulation studies, using the Semiconductor Manufacturing Testbed 2020 as a test reference, and aims to serve as a helpful tool for simulation experiments. Key aspects include model structure synchronization, Design of Experiments, and the distributed execution of large-scale simulation studies. pdfAggregated Simulation Modeling to Assess Product-Specific Safety Stock Targets During Market Up- and Downswing: A Case Study Cas Rosman and Eric Thijs Weijers (NXP Semiconductors N.V., Eindhoven University of Technology); Kai Schelthoff (NXP Semiconductors N.V.); and Willem van Jaarsveld, Alp Akcay, and Ivo Adan (Eindhoven University of Technology) Tags: discrete-event, input modeling, output analysis, semiconductor, supply chain Abstract AbstractIn this study, we propose an aggregated simulation model of the back-end supply chain for manufacturing semiconductors. The simulation model is applied to real-world data from NXP Semiconductors N.V. to assess the need for safety stock at the die bank during market up- and downswings, respectively. To model demand uncertainty, we use future forecasts and adjust them by sampling from discrete distributions of historical forecast errors. 
For modelling supply, we propose an aggregated simulation model of the back-end supply chain and assume a front-end process that produces to forecast (Make To Stock) with the addition of safety stock. We conclude that during market up- or downswings, the impact of safety stock target levels on supply chain performance differs significantly. The proposed method allows supply chain managers to assess the impact of safety stock target levels on key performance indicators. pdfDigital Twin Based Uncertainty Informed Time Constraint Control in Semiconductor Manufacturing Marvin Carl May, Lars Kiefer, and Gisela Lanza (Karlsruhe Institute of Technology) Tags: data driven, discrete-event, digital twin, semiconductor, student Abstract AbstractSemiconductor manufacturing, commonly described as a complex job shop, contains product-inherent constraints that amplify dispatching complexities. Most notably, time constraints restrict the maximum waiting time between processes, inducing the need to control the release of time-constrained lots, as violations lead to scrap. The proposed approach uses a fab-wide digital twin in the form of a discrete event simulation based on a knowledge graph structure and derives uncertainty-informed time constraint violation probabilities through rollouts in near real time. A real-world front-end semiconductor manufacturing fab serves as an ex-post validation and shows the benefits of the approach. pdf MASM: Semiconductor ManufacturingAI Applications in Semiconductor Manufacturing Chair: Andreas Klemmt (Infineon Technologies Dresden GmbH)
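The digital twin time-constraint paper above estimates violation probabilities through simulation rollouts. The core idea, stripped of the knowledge-graph twin, is a Monte Carlo estimate of the chance that a lot's waiting time between two steps exceeds its limit; the waiting-time distribution below is invented for illustration:

```python
# Monte Carlo rollout sketch: estimate P(waiting time > time constraint).
# The delay distribution and time limit are hypothetical; in the paper,
# rollouts are driven by a fab-wide discrete event simulation.
import random

random.seed(42)

TIME_LIMIT = 8.0   # maximum allowed hours between the two process steps

def rollout_wait():
    # hypothetical queueing delay: sum of three exponential stage delays (mean 2 h each)
    return sum(random.expovariate(1.0 / 2.0) for _ in range(3))

n = 10_000
violations = sum(rollout_wait() > TIME_LIMIT for _ in range(n))
violation_probability = violations / n
```

A release controller can then hold back a lot whenever this estimated probability exceeds a risk threshold, trading throughput against scrap risk.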
About AI-Based Real-Time Dispatching as Compared to Optimized Scheduling in Semiconductor Manufacturing Peter Lendermann (D-SIMLAB Technologies Pte Ltd) Abstract AbstractWith the rise of Artificial Intelligence (AI) in recent years, AI techniques have also become a hot topic in Semiconductor Manufacturing. This presentation will look at Reinforcement Learning enabled WIP flow management and discuss why some recent advancements in the field of AI-enabled dispatching have to be taken with caution and why such techniques bear much more potential for production scheduling instead. pdfEnhanced Ontology Extraction: Integrating GPT AI with Human Knowledge on the Example of EU Standards Related to Semiconductor Supply Chains George Dimitrakopoulos (Harokopio University), Hans Ehm (Infineon Technologies AG), and Eleni Tsaousi (Harokopio University) Abstract AbstractThis paper addresses challenges in creating ontologies for the semiconductor supply chain. Ontologies are crucial for seamless data exchange within the semantic web, enabling initiatives like GAIA-X and CatenA-X. Traditionally, ontology creation is complex. Here, we propose a novel AI-assisted method using large language models (LLMs) like ChatGPT 4 Turbo to support human experts. This collaboration aims to expedite ontology generation while maintaining quality. While initial tests show promise, refining the human-AI interface for clear content generation remains a focus. By improving this collaboration, we expect to create more accurate and complete ontologies, fostering efficient information sharing and strengthening the meaningfulness of standards within the semiconductor supply chain. 
pdfA Standard Framework for AI-driven Optimizations in Various Complex Domains Tobias Bosse (Robert Bosch GmbH, Karlsruhe Institute of Technology); Evangelos Angelidis (Robert Bosch GmbH); Feifei Zhang, Chew Wye Chan, and Boon Ping Gan (D-SIMLAB Technologies Pte Ltd); and Matthias Werner and Andrej Gisbrecht (Robert Bosch GmbH) Abstract AbstractDeploying AI in production is a challenging task that necessitates the integration of various standalone software solutions, including Manufacturing Execution System, Simulation, and AI. The process of discussing, defining, and implementing interfaces requires both time and financial resources. In this paper, we propose a standardized interface framework that not only facilitates the training of Deep reinforcement learning agents in the semiconductor industry but also offers flexibility, extensibility, and configurability for other use cases and domains. To achieve this, we present OpenAPI interface definitions that enable automatic code generation, along with a set of data schemas that adequately describe the semiconductor domain for most use cases. By adopting this framework, the coupling between software modules is reduced, enabling researchers to work independently on their solutions. Moreover, it ensures compatibility among modules, allowing for plug and play functionality, and simplifies the deployment process in production. pdf
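The standardized interface framework above decouples the simulator, MES, and RL agent via shared data schemas. A minimal sketch of that idea in plain Python dataclasses (field names are illustrative, not the framework's actual OpenAPI schema) might be:

```python
# Hedged sketch of shared message schemas decoupling simulator/MES and agent.
# All type and field names here are hypothetical, for illustration only.
from dataclasses import dataclass, asdict

@dataclass
class ObservationMessage:          # simulator/MES -> agent
    lot_id: str
    queue_length: int
    tool_state: str

@dataclass
class ActionMessage:               # agent -> simulator/MES
    lot_id: str
    dispatch_to: str

obs = ObservationMessage(lot_id="LOT-001", queue_length=7, tool_state="IDLE")
act = ActionMessage(lot_id=obs.lot_id, dispatch_to="TOOL-3")
payload = asdict(act)              # serializable dict, ready for a JSON API
```

In the framework itself, such schemas are expressed as OpenAPI definitions so that client and server stubs can be code-generated in each module's language, which is what enables the plug-and-play coupling the abstract describes.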
Military and National Security Applications Track Coordinator - Military and National Security Applications: Gonzalo Hernando (Air Force Institute of Technology), James Starling (U.S. Military Academy) Military and National Security ApplicationsMilitary Keynote: Combat and Complexity Chair: James Starling (U.S. Military Academy)
Combat and Complexity: Using Modeling and Simulation to Understand the Implications for the Next War Andreas Tolk (The MITRE Corporation) Abstract AbstractIdeas of complexity can be found in the works of Sun Tzu and Clausewitz, so complexity in combat is not a new concept for the military community and the supporting modeling, simulation, and analysis experts. However, the amount of complexity is increasing. Early weapon systems did not reach beyond the direct control of the user. The battlefield could be delimited using organizational boundaries defining areas of responsibility assigned to local units. Today’s weapon effects reach beyond the control of the user. Areas of responsibility overlap. Unit boundaries are no longer efficient, and collaboration in the overlapping areas is needed. Today’s military operations increasingly rely upon joint, coalition, allied, and combined multilateral forces that are optimized and task-organized. Air, land, sea, space, and cyber operations are being tied together on a multidomain battlefield characterized by non-linear operations in a networked kill web. Such kill webs provide a new form of operational agility that is far beyond current capabilities, but they also require new degrees of weapon system interoperability and new concepts for battle management command and control. This presentation shows implications for the next war and recommends a closer collaboration with the complex adaptive systems community to benefit from their methods and tools. pdf Military and National Security ApplicationsReinforcement Learning and AI in Simulation Chair: Seunghan Lee (Mississippi State University)
Offline Reinforcement Learning for Autonomous Cyber Defense Agents Alexander Wei, David Bierbrauer, Emily Nack, John Pavlik, and Nathaniel Bastian (United States Military Academy) Abstract AbstractAdvanced Persistent Threats (APTs) present an evolving challenge in cybersecurity through increasingly sophisticated behavior. Cybersecurity Operations Centers (CSOCs) rely on standard playbooks to respond to myriad cyber threats; however, such methods quickly become outdated, leaving sensitive data at significant risk for theft and exploitation. To provide CSOCs with an advantage, playbooks capable of adapting to threats and environments must be developed for use with modern security orchestration and automation tools. Our methodology proposed herein trains autonomous cyber defense agents through offline reinforcement learning (RL) to address this need. Using the scalable, tailorable Cyber Virtual Assured Network, we simulate an APT conducting data exfiltration to then train an agent using offline RL and compare performance against a myopic policy. Initially, we see improvements in preventing exfiltration while ensuring authorized user access, but it is clear the agent must periodically retrain to account for changing adversarial behaviors. pdfCurriculum Interleaved Online Behavior Cloning for complex Reinforcement Learning applications Michael Möbius, Kai Fischer, and Daniel Kallfass (Airbus Defence and Space GmbH); Stefan Göricke (Army Concepts and Capabilities Development Centre, Bundeswehr); and Thomas Manfred Doll (Joint Support Service Command, Bundeswehr) Tags: machine learning, military, industry, government Abstract AbstractThis paper introduces Curriculum Interleaved Online Behavior Cloning (IOBC) as an approach to train agents for military operations, addressing not only the challenges posed by complex and dynamic combat scenarios but also how military doctrines and strategies are transferred to these agents. 
It highlights the limitations of traditional reinforcement learning (RL) methods and proposes interleaved online behavior cloning in combination with curriculum learning as a solution to enhance RL agent training. By leveraging rule-based agents for guidance during training, IOBC accelerates learning and improves the RL agent's performance, particularly in early stages of training and complex scenarios. The study conducted experiments using ReLeGSim, a reinforcement learning-focused simulation environment, demonstrating the effectiveness of our method in enhancing agent performance and scalability. Results indicate that IOBC significantly outperforms RL agents without guidance, providing a stable foundation for learning in challenging environments. These findings underscore the potential of IOBC in real-world military applications. pdfAI-Driven Physics-based Simulation Design with Optical Flow Based Markov Decision Processes for Smart Surveillance Seunghan Lee (Mississippi State University), Yinwei Zhang (The University of Arizona), and Aaron LeGrand and Seth Gibson-Todd (Mississippi State University) Tags: data driven, machine learning, aviation, military, resiliency Abstract AbstractThis paper presents an integrated approach to enhancing situational awareness and decision-making in dynamic environments by combining optical-flow-based Markov Decision Processes (MDP) with physics-based simulations for proactive surveillance system design. By utilizing optical flow for real-time motion detection and analysis, our framework achieves immediate comprehension of environmental changes, which is essential for autonomous navigation and surveillance applications. Additionally, the framework employs MDP to model and resolve decision-making problems where outcomes are partly random and partly controlled by the decision-maker, optimizing actions based on predicted future states. 
The model was validated through Hardware-in-the-loop (HITL) simulations, providing a detailed understanding of the physical phenomena influencing the system. This ensures that decisions are data-driven and customized to specific situations and missions. Our approach offers a comprehensive understanding of system integration with AI, along with real-time analysis and decision-making capabilities, thereby advancing the simulation modeling methodology for engaging with complex, dynamic environments. pdf Military and National Security ApplicationsMilitary Workforce and Operational Planning Chair: Mehdi Benhassine (Royal Military Academy)
Experience Accumulation in Military Workforce Planning Jillian Anne Henderson (Defence Research and Development Canada), Cameron Pike (Defence Science and Technology Group), Slawomir Wesolkowski (Royal Canadian Air Force), and Rene Seguin (Defence Research and Development Canada) Tags: discrete-event, Monte Carlo, output analysis, verification, military Abstract AbstractWorkforce planning is at the core of military strategic planning. We focus on a critical aspect of many occupations in military workforces: the on-the-job training received by an inexperienced mentee under the supervision of an experienced mentor. We introduce the Experience Accumulation Module to allow the Athena Lite workforce modelling software to consider experience gained through on-the-job training. We compare four models and illustrate their differences with detailed results and analysis: (a) promotion requires minimum time in level; (b) promotion requires upgrade gained through physical resource usage; (c) promotion requires physical resource usage and Mentors; and (d) promotion through physical resource-constrained upgrade. We verify model implementation in Athena Lite by comparing our results from (a) through (c) with a well-known continuous workforce model. We then increase the complexity of our model in (d) and study the impact of resource levels on the upgrade process and the health of the population. 
pdfA Mentored Experience Accumulation Differential Model: Rapid Parameter Space Analysis Applied to Royal Canadian Air Force Pilot Production, Absorption and Retention Jack Quirion, Stephen Okazawa, Robert Mark Bryce, and Jillian Anne Henderson (Centre for Operational Research and Analysis) Tags: discrete-event, system dynamics, military, student, government Abstract AbstractSince the early 2000s, the Royal Canadian Air Force (RCAF) has used a detailed personnel training model—Pilot Production, Absorption, Retention Simulation (PARSim)—to study the progress of pilots from recruitment to release. The model captures key dynamics of pilot career throughput, with particular attention paid to the upgrade of inexperienced pilots arriving at operational squadrons via mentoring by experienced pilots. Here we develop a simplified model of the same career structure, based on systems of differential equations, that captures the fundamental dynamics and constraints of the full PARSim model but enables rapid analysis of the parameter space via numerical simulation to produce a higher level view of pilot occupation health. A further advantage of this model is that, within certain domains, the equations can be solved analytically which provides valuable insights into the system’s stability, steady state, and critical conditions in terms of the model’s fundamental parameters. pdfModeling Operational Demand for Canada’s Future Naval Fleet: A Case Study on Maintaining Expected Frequencies of Military Vignettes Lynne Serre and Lise Arseneau (Defence Research and Development Canada) Tags: discrete-event, Monte Carlo, military, Python, government Abstract AbstractThe Royal Canadian Navy is currently undertaking a fleet mix study to determine the optimal composition of its fleet to meet future operational requirements. The operational demand is modeled by generating possible future timelines of vignettes that can occur concurrently using a Monte Carlo discrete event simulation model. 
In this paper, we examine two aspects related to modeling operational demand stochastically: the number of replications required to ensure that the input frequencies of the vignettes correspond to a certain level of accuracy in the output frequencies from the simulation; and how the application of concurrency constraints to individual vignettes may lower the output frequencies if events are scheduled purely at random. We propose a modified scheduling algorithm that better maintains the input frequencies. Both aspects need to be taken into consideration to ensure that the modeled operational demands are representative of possible and probable futures for the Navy. pdf Military and National Security ApplicationsCybersecurity and Simulation of Cyber Threats Chair: Gonzalo Hernando (Air Force Institute of Technology)
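As a toy illustration of the scheduling issue described in the fleet-mix abstract above, the sketch below contrasts purely random vignette scheduling under a concurrency cap with a deferral-based variant. The vignette names, frequencies, and the compensation rule are invented for illustration and are not the authors' algorithm.

```python
import random
from collections import Counter

def simulate(freqs, horizon, cap, compensate, rng):
    """One timeline: in each slot, vignette v occurs with probability
    freqs[v]; at most `cap` vignettes may run concurrently. Without
    compensation, excess occurrences are dropped, biasing realized
    frequencies low; with compensation, they are deferred to later slots."""
    realized, backlog = Counter(), []
    for _ in range(horizon):
        want = backlog + [v for v in freqs if rng.random() < freqs[v]]
        run, backlog = want[:cap], (want[cap:] if compensate else [])
        realized.update(run)
    return realized

rng = random.Random(1)
freqs = {"patrol": 0.6, "sar": 0.5, "escort": 0.4}   # illustrative inputs
reps, horizon = 400, 100
naive, fixed = Counter(), Counter()
for _ in range(reps):
    naive.update(simulate(freqs, horizon, cap=2, compensate=False, rng=rng))
    fixed.update(simulate(freqs, horizon, cap=2, compensate=True, rng=rng))
n = reps * horizon
print({v: round(naive[v] / n, 3) for v in freqs})    # realized per-slot rates
print({v: round(fixed[v] / n, 3) for v in freqs})
```

With a cap of 2 and these inputs, roughly 12% of slots draw three concurrent vignettes; purely random scheduling silently drops the excess, so the last-listed vignette falls well below its input frequency, while the deferral variant stays close to it.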
Simulation of Low Earth Orbit Satellite Communication Data for Cyber Attack Detection Laila Mashayekhi and Michael E. Kuhl (Rochester Institute of Technology) Tags: cybersecurity, student Abstract AbstractSatellites play a crucial role in the global infrastructure of internet communication, commerce, and society. However, they are vulnerable to cyber-attacks, with distributed denial of service (DDoS) attacks being among the most prevalent threats. These attacks disrupt normal operations by flooding a network's resources. In response to the increasing frequency of such threats, there is a growing need for predictive methods of detecting bad actor signals. This paper proposes a stochastic logging system designed to capture and analyze key metrics associated with DDoS attacks on Low Earth Orbit (LEO) satellites, including uplink and downlink speeds, spectrogram speeds, and latency. By logging these measures, the system aims to identify abnormal signal activity. The methodology involves generating synthetic data representative of LEO satellite communication metrics and validating these values against predefined acceptable ranges. These datasets could then be used to develop methods for detecting and mitigating attacks on satellite networks. pdfDesign, Modeling and Simulation of Cybercriminal Personality-based Cyberattack Campaigns Jeongkeun Shin, Geoffrey B. Dobson, L. Richard Carley, and Kathleen M. Carley (Carnegie Mellon University) Tags: agent-based, behavior, complex systems, cybersecurity, student Abstract AbstractCybersecurity challenges are inherently complex, characterized by both advanced technical elements and complex aspects of human cognition. Although extensive research has explored how victims' human factors affect their susceptibility to cyberattacks, the influence of cybercriminals' personalities on the pattern of cyberattack campaigns and the resulting damage to organizations has not received equivalent attention. 
To bridge this research gap through computer simulation, we introduce a cyberattack campaign designer that enables modelers to construct cyberattack campaigns. This tool allows modelers to define how the process or pattern of the cyberattack campaign can change based on the cybercriminal's personality and simulates how these personality differences influence the magnitude of cyberattack damage on the target organization. In this paper, through computer simulation, we demonstrate how cautious and reckless personalities result in variations in the cyberattack pattern and, consequently, affect the magnitude of cyberattack damage, despite identical cyberattack objectives and techniques used at each step. pdfData Model and Simulation for Persistent Mission Planning with Energy-Sharing Autonomous Ground and Air Vehicles James Humann (DEVCOM Army Research Laboratory); Steven Carlos Ortega, James Glenn, and Jack L. Folsom (AFROTC); Subramanian Ramasamy, Md. Safwan Mondal, and Pranav Bhounsule (UIC); and Jean-Paul Reddinger and James Dotterweich (DEVCOM Army Research Laboratory) Tags: agent-based, aviation, logistics, military, Java Abstract AbstractWe introduce the Energy-Aware Mission Planning (EAMP) problem in the context of long-endurance robotic deployments using a mixture of ground and air vehicles. Unmanned ground and air vehicles have complementary strengths, namely long battery life in the ground vehicles and maneuverability in the air vehicles. To facilitate coordinated mission planning that allows the air vehicles to dock and recharge on the ground vehicles, we introduce a routing problem and specification of solver inputs and outputs. We then incorporate a solver from the literature to create plans for a realistic scenario, and show the results in a simulated 72 h deployment. pdf Military and National Security ApplicationsSimulation in Disaster and Tactical Response Chair: James Humann (Army Research Laboratory)
Discrete-event Simulation of the Disaster Response in the Aftermath of a Coordinated Unmanned Aerial Vehicle Strike in an Urban Area Mehdi Benhassine (Royal Military Academy), Ruben De Rouck (Vrije Universiteit Brussel), Michel Debacker (Vrije University Brussel), Ives Hubloue (Vrije Universiteit Brussel), John Quinn (Charles University), and Filip Van Utterbeeck (Royal Military Academy) Tags: discrete-event, complex systems, healthcare, military Abstract AbstractThe increasing use of suicidal and explosive Unmanned Aerial Vehicles (UAVs) poses a significant threat in both battlefield and urban environments, as evidenced by recent events in Ukraine. This study employs the SIMEDIS Simulator to simulate a triple UAV strike in Brussels City Centre, comparing two evacuation strategies: "Stay and Play" versus "Scoop and Run." The simulation incorporates real medical facility locations, bed capacities, and evolving patient conditions. Findings highlight the importance of rapid patient transport to surgical facilities, emphasizing the effectiveness of the "Scoop and Run" approach alongside timely medical interventions and adequate blood supply. Challenges such as hemorrhage control and managing multiple disaster sites are also discussed. This study underscores the necessity of efficient evacuation protocols and medical responses to mitigate casualties in urban disaster scenarios involving UAV attacks. pdfIntegrating Actual Human Behavior into an Agent-Based School Shooting Simulation Kevin Kapadia (University of Southern California), Nutchanon Yongsatianchot (Thammasat University), Stacy Marsella (Northeastern University), and Richard John (University of Southern California) Tags: agent-based, behavior, Netlogo, R, @Risk Abstract AbstractWith the ever-growing threat of school shootings, modeling these tragedies is crucial to mitigate or reduce casualties in future events. 
However, agent-based models traditionally instruct agents to act based on theoretical behavior rather than actual human behavior. We present the results of 81,000 simulations of a school shooting where agent behavior is modeled after actual human behavior from a similar scenario. Agents' reaction time and moving speed are drawn from probability distributions based on human data. The pathway assigned to agents is based on the human behavior exhibited in response to three social influence conditions where the non-player characters all ran, all hid, or both ran and hid. Additionally, we manipulate law enforcement dispatch time, shooter accuracy, and magazine capacity. Results show mixed agent behavior and lower dispatch times had the largest influence on casualties. The methodology demonstrates the power of empirically defining agent behavior in ABMs. pdf Military and National Security ApplicationsGenerative and Simulation-Based Learning Chair: James Starling (U.S. Military Academy)
Generative Learning for Simulation of Vehicle Faults Patrick Kuiper (Duke University, US Army); Sirui Lin and Jose Blanchet (Stanford University); and Vahid Tarokh (Duke University) Tags: neural networks, optimization, military, Python, government Abstract AbstractWe develop a novel generative model to simulate vehicle health and forecast faults, conditioned on practical operational considerations. The model, trained on data from the US Army's Predictive Logistics program, aims to support predictive maintenance. It forecasts faults far enough in advance to execute a maintenance intervention before a breakdown occurs. The model incorporates real-world factors that affect vehicle health. It also allows us to understand the vehicle's condition by analyzing operating data and classifying each vehicle into discrete states. Importantly, the model predicts the time to first fault with high accuracy. We compare its performance to other models and demonstrate its successful training. pdfDEVS-Based Simulation Acceleration for AI Training: Unmanned Surface Vehicle Case Juho Choi (KAIST), Jang Won Bae (Korea University of Technology and Education), and Il-Chul Moon (KAIST) Tags: DEVS, military, Python, student Abstract AbstractAs demand for USV usage increases, the development of simulators for AI training is becoming crucial. This paper introduces a DEVS-based simulation acceleration technique achieved through dynamic changes in the model structure, which is enabled by DSDEVS, while considering domain-specific characteristics. In the case study, the proposed method was applied to and evaluated using USV models. Specifically, the proposed method adapts the coupling structure of the USV maneuver model based on changes in bank angle during simulations; the coupling structure of the USV sensor model is modified according to the distance from enemy units. 
These changes reduce unnecessary event exchanges during simulation execution, thus increasing its speed. Furthermore, they enable dynamic control of time advances in USV models, further improving simulation speed. The case study shows the proposed method effectively accelerates simulation execution, but it involves a trade-off with simulation accuracy. pdfBuilding Equitable Student Project Groups - A Simulation Study to Assess Heuristic Assignment Methods Matthew Dabkowski, Stephen Gillespie, Ian Kloo, Devon Compeau, and Mai Tran (USMA) Abstract AbstractThis paper explores methods for forming equitable project groups in a class by minimizing the range in mean cumulative quality point averages (CQPAs) across the groups. Using simulation, it forms representative classes of various sizes from real-world CQPA data, and it compares the approximate solutions of seven heuristic assignment methods to the globally optimal solutions found with mixed integer linear programming. Finding a near optimal heuristic is useful as exact methods can be computationally expensive or require specialized knowledge or software. Simulation results indicate that the Alternating Convergence heuristic consistently outperforms the other six heuristics, generating near optimal solutions with low computational cost. This suggests the Alternating Convergence heuristic is a practical method for building project groups when equitability in scholastic achievement supports one’s pedagogical philosophy or objectives. pdf Military and National Security ApplicationsCombat Ready Force Simulations Chair: Gonzalo Hernando (Air Force Institute of Technology)
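The grouping problem in the student-groups abstract above is easy to prototype. The sketch below uses a generic pairwise-swap improvement heuristic (not necessarily the paper's Alternating Convergence method) to shrink the range of group mean CQPAs; the class data are synthetic.

```python
import random

def group_range(groups):
    """Range of group mean CQPAs: the equity measure to minimize."""
    means = [sum(g) / len(g) for g in groups]
    return max(means) - min(means)

def swap_heuristic(cqpas, k, iters, rng):
    """Start from a random balanced partition into k groups, then
    repeatedly apply the best student swap between the current
    highest-mean and lowest-mean groups while it shrinks the range."""
    students = cqpas[:]
    rng.shuffle(students)
    groups = [students[i::k] for i in range(k)]
    for _ in range(iters):
        means = [sum(g) / len(g) for g in groups]
        hi, lo = means.index(max(means)), means.index(min(means))
        best, move = group_range(groups), None
        for i, a in enumerate(groups[hi]):
            for j, b in enumerate(groups[lo]):
                groups[hi][i], groups[lo][j] = b, a   # try the swap
                if (r := group_range(groups)) < best:
                    best, move = r, (i, j)
                groups[hi][i], groups[lo][j] = a, b   # undo
        if move is None:
            break                                     # local optimum reached
        i, j = move
        groups[hi][i], groups[lo][j] = groups[lo][j], groups[hi][i]
    return groups

rng = random.Random(0)
cqpas = [round(rng.uniform(2.0, 4.0), 2) for _ in range(24)]  # synthetic class
groups = swap_heuristic(cqpas, k=4, iters=100, rng=rng)
print(round(group_range(groups), 4))
```

Such a local-search heuristic is cheap compared with solving the equivalent mixed integer program exactly, which is the trade-off the abstract evaluates.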
Capturing Soldier Fitness Levels in Combat Simulation Vikram Mittal and Paul Evangelista (USMA) Tags: agent-based, military Abstract AbstractCombat simulations typically depict individual soldiers as identical entities, neglecting differences in their ability to execute warfighting tasks. An essential aspect of a soldier's effectiveness is their physical fitness, which impacts their performance on the battlefield, including their mobility, survivability, and lethality. This paper develops two models to correlate soldier physical fitness to movement speeds and shooting accuracy. The first model, derived from literature, links soldier movement speeds and durations to their physical fitness levels, characterized by their VO2max, which is correlated with their two-mile run time. The second model used data collected from soldiers (n=60) to correlate shooting accuracy, fitness, and marksmanship scores when firing from a standing position. These models are integrated into an agent-based combat simulation to assess the impact of physical fitness on soldier survivability and lethality in urban operations. The results show that increased soldier physical fitness significantly improves soldier survivability. pdfInteractive Multi-level Virtual Tactical Simulation: The Development of an Autonomy Architecture to Improve Training Experience Edison P. de Freitas (UFRGS) and Eliakim Zacarias, Raul C. Nunes, and Luis A. L. Silva (UFSM) Tags: agent-based, military, government Abstract AbstractMulti-level simulation setups are crucial to train individuals with different learning needs in the same virtual environment. For users at the group command level in virtual tactical simulations for defense/military, tasks simulated at the unit (lower) levels are usually executed autonomously. 
The contribution of this paper is to detail how these autonomous agent tasks are performed in a chain of actions and commands that include the possibility of high-rank users’ interaction with the simulations. The work reports on the experience of designing and implementing an autonomy architecture for an agent-based simulation system called SIS-ASTROS. There, autonomous tasks executed by simulated artillery batteries are grounded on the exposition and controlled execution of the API of the SIS-ASTROS simulator. In a user-friendly specification, simulation analysts can provide scripts for doctrine-based agent behaviors, where an autonomy
simulator module executes these scripts to improve the overall training experience. pdf
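The pattern described in the SIS-ASTROS abstract above, doctrine scripts supplied as data and executed by an autonomy module against an exposed simulator API, can be sketched roughly as follows. All names here (SimAPI, fire_mission, displace) are invented for illustration and are not the actual SIS-ASTROS interface.

```python
class SimAPI:
    """Stand-in for the exposed simulator API (hypothetical)."""
    def __init__(self):
        self.log = []
    def fire_mission(self, battery, target):
        self.log.append(f"{battery} fires at {target}")
    def displace(self, battery, position):
        self.log.append(f"{battery} displaces to {position}")

# A doctrine-based behavior expressed as a declarative script
# (here, a simple 'shoot-and-scoot' sequence).
DOCTRINE = [
    ("fire_mission", {"target": "OBJ-1"}),
    ("displace", {"position": "alt-pos-A"}),
]

def run_script(api, battery, script):
    """Autonomy module: dispatch each scripted step to the simulator API.
    A higher-level commander could interleave manual commands between steps."""
    for action, kwargs in script:
        getattr(api, action)(battery=battery, **kwargs)

api = SimAPI()
run_script(api, "Battery-1", DOCTRINE)
print(api.log)
```

Keeping the behavior as data rather than code is what lets simulation analysts, rather than developers, author doctrine-conformant agent behaviors.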
Track Coordinator - Modeling Methodology: Rodrigo Castro (ICC-CONICET, Universidad de Buenos Aires), Andrea D'Ambrogio (University of Roma TorVergata), Gabriel Wainer (Carleton University) Modeling MethodologyEstimation and Validation Methods Chair: Pia Wilsdorf (University of Rostock)
Potential and Challenges of Assurance Cases for Simulation Validation Pia Wilsdorf (University of Rostock), Steffen Zschaler (King's College London), and Fiete Haack and Adelinde M. Uhrmacher (University of Rostock) Abstract AbstractSimulation studies require thorough validation to ensure model accuracy, reliability, and credibility. While validation typically focuses on the simulation model itself, additional artifacts also influence study outcomes. Conceptual models, comprising research questions, requirements, inputs and outputs, model content, assumptions, and simplifications, provide context information for interpreting results and assessing model suitability. Validating other simulation artifacts for their fitness-for-purpose is complex, necessitating structured arguments to increase confidence. This paper explores when and how validation arguments should be constructed throughout the modeling and simulation lifecycle. Drawing on concepts from safety assurance cases, it defines key claims for the various artifacts, discusses validation arguments from different perspectives - including process, product, people, and project - and illustrates them through a computational biology case study. We conclude with a discussion of the suitability of such structured arguments for the comprehensive validation of simulation studies. pdfMethodology for Online Estimation of Rheological Parameters in Polymer Melts Using Deep Learning and Microfluidics Juan Sandubete-López (Universidad Complutense de Madrid, Microfluidics Innovation Unit); José L. Risco-Martín (Universidad Complutense de Madrid); Alexander McMillan (Microfluidic Innovation Center); and Eva Besada-Portas (Universidad Complutense de Madrid) Abstract AbstractMicrofluidic devices are increasingly used in biological and chemical experiments due to their cost-effectiveness for rheological estimation in fluids. However, these devices often face challenges in terms of accuracy, size, and cost. 
This study presents a methodology that integrates deep learning with modeling and simulation to enhance the design of microfluidic systems, and applies it to develop an innovative approach to viscosity measurement of polymer melts. We use synthetic data generated from the simulations to train a deep learning model, which then identifies rheological parameters of polymer melts from pressure drop and flow rate measurements in a microfluidic circuit, enabling online estimation of fluid properties. By improving the accuracy and flexibility of microfluidic rheological estimation, our methodology accelerates the design and testing of microfluidic devices, reducing reliance on physical prototypes, and offering significant contributions to the field. pdfContinuous Optimization for Offline Change Point Detection and Estimation Hans Reimann (University of Potsdam), Sarat Moka (University of New South Wales), and Georgy Sofronov (Macquarie University) Tags: estimation, machine learning, optimization, conceptual modeling Abstract AbstractThis work explores the use of novel advances in best subset selection for regression modelling via continuous optimization for offline change point detection and estimation in univariate Gaussian data sequences. The approach reformulates the normal mean multiple change point model as a regularized statistical inverse problem enforcing sparsity.
After introducing the problem statement, criteria and previous investigations via Lasso-regularization, the recently developed framework of continuous optimization for best subset selection (COMBSS) is briefly introduced and related to the problem at hand. Supervised and unsupervised perspectives are explored with the latter testing different approaches for the choice of regularization penalty parameters via the discrepancy principle and a confidence bound. The main result is an adaptation and evaluation of the COMBSS approach for offline normal mean multiple change-point detection via experimental results on simulated data for different choices of regularisation parameters. Results and future directions are discussed. pdf Modeling MethodologyParallel and Distributed Simulation Chair: Alessandro Pellegrini (University of Rome Tor Vergata)
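To make the normal-mean reformulation in the change-point abstract above concrete: writing y = Xβ + ε with X the lower-triangular all-ones matrix, a nonzero β_j for j > 0 corresponds to a mean shift at time j. The sketch below recovers shifts on synthetic data with a crude greedy forward selection over the columns of X, a stand-in for the Lasso/COMBSS solvers the paper studies, not the paper's method itself.

```python
import numpy as np

def greedy_changepoints(y, k):
    """Greedy forward selection of k step locations in the model
    y = X @ beta + noise, with X lower-triangular ones: at each round,
    add the step column that most reduces the residual sum of squares."""
    n = len(y)
    X = np.tril(np.ones((n, n)))     # column j = unit step starting at t=j
    active = [0]                     # column 0 carries the overall level
    for _ in range(k):
        best_j, best_rss = None, np.inf
        for j in range(1, n):
            if j in active:
                continue
            A = X[:, active + [j]]
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(np.sum((y - A @ beta) ** 2))
            if rss < best_rss:
                best_rss, best_j = rss, j
        active.append(best_j)
    return sorted(active[1:])        # estimated change-point locations

rng = np.random.default_rng(7)
y = np.concatenate([rng.normal(0.0, 0.3, 40),    # mean 0
                    rng.normal(2.0, 0.3, 30),    # shift at t = 40
                    rng.normal(-1.0, 0.3, 30)])  # shift at t = 70
cps = greedy_changepoints(y, k=2)
print(cps)
```

With this signal-to-noise ratio the greedy search lands on (or immediately next to) the true shift locations; sparse-regression formulations replace the combinatorial column search with a continuous penalized problem.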
Model-Driven Engineering for High-Performance Parallel Discrete Event Simulations on Heterogeneous Architectures Romolo Marotta and Alessandro Pellegrini (Tor Vergata University of Rome) Abstract AbstractModern high-performance, large-scale simulations require significant computational power, memory, and storage, making heterogeneous architectures an attractive option. The presence of accelerators in heterogeneous architectures makes model development hard. Domain-specific languages (DSLs) have successfully simplified model development, but designing a DSL to target heterogeneous architectures can be burdensome. Model-driven engineering (MDE) can simplify the development of DSLs targeting heterogeneous architectures. In this paper, we focus on MDE and propose a model-driven approach targeting Parallel Discrete Event Simulations on heterogeneous architectures. We exercise our MDE-generated models using a state-of-the-art runtime environment for heterogeneous architectures. pdfEfficient Parallel Simulation of Networked Synchronous Discrete-Event Systems Neha Karanjkar (Indian Institute of Technology Goa), Madhav Desai (Indian Institute of Technology Bombay), Akhil Kushe (Goa College of Engineering), and Anish Natekar (Indian Institute of Technology Goa) Tags: discrete-event, parallel, open source Abstract AbstractWe present Sitar, an open-source, general-purpose modeling framework consisting of a custom modeling language and a simulation kernel, designed for efficient parallel simulation of networked Synchronous Discrete-event Systems (SDES) in applications such as communication networks and computer systems design. A unique feature of Sitar is its two-phase, cycle-based simulation algorithm that allows an efficient, race-free parallel execution on shared-memory systems. This is achieved by imposing a mild restriction on the set of SDES that can be described within the framework. 
The modeling language is designed for describing complex, networked systems with a static interconnection structure. We demonstrate Sitar’s modeling capability, performance and scalability through a detailed performance evaluation and a comparison against SystemC and SimPy. Our results show that Sitar’s single-threaded performance is better than that of SimPy and comparable to that of SystemC, whereas a multi-threaded execution shows near-linear scaling for several benchmark configurations. pdfScalable HPC Job Scheduling and Resource Management in SST Abubeker Abdurahman and Abrar Hossain (University of Toledo), Kevin A. Brown and Kazutomo Yoshii (Argonne National Laboratory), and Kishwar Ahmed (University of Toledo) Tags: discrete-event, estimation, parallel, validation Abstract AbstractEfficient job scheduling and resource management contribute to maximizing system throughput and efficiency in high-performance computing (HPC) systems. In this paper, we introduce a scalable job scheduling and resource management component within the structural simulation toolkit (SST), a cycle-accurate and parallel discrete-event simulator. Our proposed simulator includes state-of-the-art job scheduling algorithms and resource management techniques. Additionally, it introduces a workflow management component that supports the simulation of task dependencies and resource allocations, crucial for workflows typical in scientific computing and data-intensive applications. We present validation and scalability results of our job scheduling simulator. Simulation shows that our simulator achieves good accuracy in various metrics (e.g., job wait times, node usage) and also achieves good parallel performance. pdf Modeling MethodologyParatemporal Simulation Chair: Gabriel Wainer (Carleton University)
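The two-phase, cycle-based idea attributed to Sitar above can be illustrated in a few lines. This is a generic sketch of the pattern, not Sitar's language or kernel: in phase one every component computes its next state from the current cycle's values only, and in phase two all updates are committed at once, so phase one is free of read/write races and can be parallelized.

```python
class Delay:
    """A unit-delay component: its output lags its input by one cycle."""
    def __init__(self):
        self.state = 0
        self.next_state = 0
    def phase1(self, inputs):          # reads current-cycle values only
        self.next_state = sum(inputs)
    def phase2(self):                  # commit: the only write to state
        self.state = self.next_state

# A ring of three unit delays; the wiring is static, as in Sitar models.
ring = [Delay() for _ in range(3)]
ring[0].state = 1                      # inject a token

for cycle in range(6):
    for i, c in enumerate(ring):       # phase 1: parallelizable, race-free
        c.phase1([ring[(i - 1) % 3].state])
    for c in ring:                     # phase 2: commit all updates at once
        c.phase2()

print([c.state for c in ring])         # token returns to its start position
```

Because no component writes state that another reads within the same phase, the phase-1 loop can be distributed over threads without locks, which is the essence of the race-free parallel execution claimed above.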
Advanced Tutorial On Paratemporal Simulation Using Tree Expansion Bernard Zeigler (University of Arizona, RTSync Corp.); Christian Koertje (University of Massachusetts); and Cole Zanni, Sangwon Yoon, and Gerardo Dutan (SUNY Binghamton) Tags: DEVS, Monte Carlo, output analysis, complex systems, tutorial Abstract AbstractStochastic simulations require large amounts of time to generate enough trajectories to attain statistical significance and estimate desired performance indices with satisfactory accuracy. They involve search spaces with deep uncertainty arising from inadequate or incomplete information about the system and the outcomes of interest. Paratemporal methods efficiently explore these large search spaces and offer an avenue for speedup when executed in parallel. However, combinatorial explosion of branching arising from multiple choice points presents a major hurdle that must be overcome to implement such techniques. In this advanced tutorial we show how to tackle this scalability problem by applying a systems theory-based framework covering both conventional and newly developed paratemporal tree expansion algorithms for speeding up discrete event system stochastic simulations while preserving the desired accuracy.
pdf Modeling MethodologyDEVS Chair: Gabriel Wainer (Carleton University)
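The core saving behind tree expansion in the paratemporal tutorial abstract above can be shown with a toy deterministic branching system: rather than re-simulating every trajectory from the root, the tree is grown level by level so each shared prefix is evaluated exactly once. The transition function here is invented for illustration.

```python
from itertools import product

calls = 0

def branch(state, choice):
    """One deterministic step of a toy branching system (illustrative)."""
    global calls
    calls += 1
    return state + (1 if choice else -1) * (1 + 0.1 * state)

T = 8  # horizon: 2**T = 256 possible trajectories

# Brute force: re-simulate every trajectory independently from t = 0,
# so shared prefixes are recomputed over and over.
calls = 0
naive_leaves = []
for path in product([0, 1], repeat=T):
    s = 0.0
    for c in path:
        s = branch(s, c)
    naive_leaves.append(s)
naive_calls = calls                      # T * 2**T transition evaluations

# Tree expansion: grow the trajectory tree level by level, so every
# shared prefix (internal node) is simulated exactly once.
calls = 0
frontier = [0.0]
for _ in range(T):
    frontier = [branch(s, c) for s in frontier for c in (0, 1)]
tree_calls = calls                       # 2**(T+1) - 2 evaluations

print(naive_calls, tree_calls)
```

Both explorations reach the identical set of leaf states, but the tree expansion needs far fewer transition evaluations; the tutorial's contribution is controlling the remaining combinatorial explosion of the tree itself.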
DEVS as a Method to Model and Simulate Combinatorial Double Auctions for E-Procurement Juan de Antón (INSISOC - University of Valladolid), Cristina Ruiz-Martin (Carleton University), and Félix Villafáñez and David Poza (INSISOC - University of Valladolid) Tags: DEVS, verification, manufacturing, C++, VBA Abstract AbstractThe surge in electronic procurement is fostering the proliferation of electronic marketplaces and advanced auctions as primary coordination mechanisms. Among these, combinatorial and double auctions are gaining traction in the procurement sector. However, prevalent implementations often assume participants to be perfectly rational, adhering to predefined behaviors within the auction model. These centralized models, while prevalent, fail to capture the intricate dynamics of real auction environments adequately. Consequently, there is a growing recognition of the necessity for decentralized models within an agent-based framework to simulate such auctions authentically. The contribution of this work is the application of the DEVS formalism to develop a decentralized model for a combinatorial iterative double auction to address the limitations of centralized implementations. The model is formally defined, and a case study is presented to verify it against its centralized version. This is the first step toward accommodating agents with varied behavioral patterns within auction simulations. pdfGenerating TCN Models from Parallel DEVS Models: Semiconductor Manufacturing Systems Vamsi Krishna Pendyala (Ariz), Hessam S. Sarjoughian (Arizona State University), and Edward J. Yellig (Intel Corporation) Tags: DEVS, machine learning, semiconductor, wafer fab, student Abstract AbstractMachine learning models have the potential to augment the simulation of discrete-event dynamical systems, which is of considerable interest. Such models should serve the central purpose of capturing the temporal dynamics of event-based systems. 
In this paper, we use simulations of Parallel DEVS (PDEVS) models of a benchmark semiconductor fabrication manufacturing system to generate ARIMA, RNN, LSTM, and TCN models of the same. Single/multi-stage manufacturing statistical and deep learning models are developed and evaluated for different experimental scenarios. We generate Temporal Convolutional Neural (TCN) network models and evaluate their uni/multivariate throughput and turnaround time series by varying wafer lot configurations and sizes. The results show the predicted time series generated by TCN models can approximate the accuracies of simulated PDEVS models while achieving many-fold execution speedup. pdfHandling Asynchronous Inputs in DEVS Based Real-Time Kernels Sasisekhar Govind and Gabriel Wainer (Carleton University) Tags: DEVS, distributed, sampling, cyber-physical systems, C++ Abstract AbstractReal-time systems are complex to design and implement. Various modelling and simulation techniques are employed to make this task more structured and efficient. However, there is often a disconnect between modeling for simulation and development for deployment. In this paper we discuss a technique to bridge this gap between simulation and deployment, specifically dealing with a framework to handle asynchronous inputs into a system developed using the Discrete Event System Specification. Further, this paper presents a case study that demonstrates the effectiveness of the framework, and the congruence between simulation and deployment of a real-time system is determined. pdf Modeling MethodologyModeling Methods Chair: Hessam Sarjoughian (Arizona State University)
Constructing Hierarchical Modular Models in Alternative and Interchangeable Representations Hessam Sarjoughian (Arizona State University) and Sheetal Mohite (Amazon) Abstract AbstractModelers should be empowered to develop component-based models incrementally and iteratively using different modalities. They should not be required to work with an entire model hierarchy or be constrained to pre-defined sub-hierarchy levels that can impose limitations on the visual development of models. This paper introduces a novel visual modeling capability where modelers can work with any sub-hierarchy with any finite number of arbitrary levels and branches. The structural representation of the components with their input/output relationships conforms to the system-theoretic DEVS models and broadly Reactive Modules. Additionally, it is advantageous for the visual and logical aspects of models to be stored and retrieved in databases. Visual models can be generated from their logical counterparts stored in the NoSQL database. A framework supporting a unified logical, visual, and persistent modeling is developed. The framework aids modelers in constructing structural models at scale for simulation and Digital Twins. pdfOn Guiding Simulation Model Reuse from the Conceptual Modeling Stage Xiaoting Song, Maurizio Tomasella, Lazuardi Almuzaki, and Jamal Ouenniche (University of Edinburgh Business School) and Silvia Padron (TBS Business School) Tags: conceptual modeling, aviation, hybrid, AnyLogic, student Abstract AbstractThis study proposes a five-stage decision-making process to streamline the reuse of simulation models from as early as the conceptual modeling stage of a new study. 
The stages assist modelers in selecting pre-existing conceptual and/or formal models, testing their suitability for reuse, selecting model components deemed reusable, adapting them to the new modeling requirements, integrating them into a `final' conceptual model and, prior to its computer coding, carrying out various steps of conceptual model validation along the way. Key novelties are: a structured approach to model reuse, an emphasis on validation steps, and the integration of practical tools. We discuss the advantages from following such a process based on evidence from a recent project that looked at enhancing aircraft turnaround processes. pdf Simulation Around the World, Modeling MethodologyDigital Twins Chair: John Shortle (George Mason University)
Digital Twins for Picking Processes - Cases Developed by a Brazilian Consulting Company Leonardo Chwif and Wilson Pereira (Simulate), Bruno Santini (Luxottica), and Felipe Tomazin (ZF) Abstract AbstractThe “picking” process is a relatively common logistics process in distribution centers: according to the picking list, products are separated and packed in specific containers (usually boxes) where customer orders are consolidated. The vast majority of picking processes involve intensive use of labor. Due to the complexity of this process, simulation is an extremely well-suited tool for its correct performance evaluation. This article will depict some applications of digital twin simulations for picking processes in practical cases for daily picking operation evaluations. pdfPort Management Digital Twin and Control Tower Integration: an Approach to Support Real-time Decision Making Alice Oliveira Fernandes (UNICAMP, Belge Smart Supply Chain) and Daniel Gutierres, Marcelo Fugihara, and Bruno de Norman (Belge Smart Supply Chain) Abstract AbstractDiscrete event simulation plays a pivotal role in facilitating decision-making within logistics, necessitating real-time initiation based on the current state of the system. The architecture outlined in this article integrates a real-time Digital Twin with simulation logic and a Control Tower into a cohesive model, thereby reducing offline efforts and runtime. This paper presents a groundbreaking project in a Brazilian port. Its holistic approach offers a comprehensive overview of port operations and enables predictive insights for up to 72 hours in advance. Beyond enhancing operational efficiency, it promotes proactive decision-making and adaptive resource allocation, marking a paradigm shift in port management. The integration of real-time feedback and dynamic optimization algorithms can further enhance the responsiveness and adaptability of the system to changing operational conditions.
Future development lies in enhancing the predictive analytics capabilities of the model by leveraging machine learning algorithms and advanced analytics techniques. pdfA High Extensible Modeling Method using Three-layer Component-based Architecture Yuan Haozhe, Yao Yiping, Tang Wenjie, and Zhu Feng (National University of Defense Technology) Tags: composability, complex systems, student Abstract AbstractExtensibility and reusability are important yet competing objectives in the modeling process. Despite the progress made by current modeling methodologies, they tend to be limited by either one-way control transfers or static data linkages. Considering these limitations, we introduce a novel three-layer, progressive, composable modeling architecture that divides the system model into three layers: components, entities, and systems. It incorporates behavior trees for the assembly of functional components into entities and adopts a publish-subscribe communication paradigm to dynamically establish interactions among entities. Case studies confirmed that this approach facilitates the efficient development of simulation models. Moreover, it allows for the agile adaptation of entity behavior models and their interconnections, ensuring robust extensibility and optimizing the reuse of models as simulation requirements evolve. pdf
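The publish-subscribe paradigm that the three-layer architecture above uses to couple entities dynamically can be illustrated with a minimal broker; the entity and topic names here are invented for the sketch and are not from the paper.

```python
class Broker:
    """Minimal publish-subscribe hub: topics map to subscriber callbacks."""
    def __init__(self):
        self.subs = {}                          # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subs.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for cb in self.subs.get(topic, []):     # deliver to every subscriber
            cb(message)

class Entity:
    """An entity assembled from components; it reacts to published events."""
    def __init__(self, name, broker, topic):
        self.name, self.received = name, []
        broker.subscribe(topic, self.received.append)

broker = Broker()
sensor = Entity("sensor", broker, "temperature")
logger = Entity("logger", broker, "temperature")
broker.publish("temperature", 21.5)             # both entities receive it
```

Because entities only know topics, not each other, interconnections can be added or removed at run time, which is the extensibility property the abstract emphasizes.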
Project Management and Construction Track Coordinator - Project Management and Construction: Eric Jing Du (University of Florida), Joseph Louis (Oregon State University) Project Management and ConstructionData-Driven Approaches to Infrastructure Management Chair: Abdolmajid Erfani (Michigan Tech University)
Uncovering Socioeconomic Features In Pavement Conditions Through Data Mining: A Two-Step Clustering Model Tamim Adnan and Abdolmajid Erfani (Michigan Tech University) Tags: big data, data driven, construction, project management, Python Abstract AbstractAcross the United States, individuals regardless of their socioeconomic backgrounds deserve equitable access to high-quality roads and highways. This research delves into the use of data mining methods to examine access quality, focusing on pavement condition through the International Roughness Index (IRI) and socioeconomic factors, by exploring the Highway Performance Monitoring System (HPMS) dataset. Data mining serves as an exploratory process, unveiling and visualizing valuable yet not immediately evident insights within extensive datasets. Through data mining with two-step clusters, k-means, and hierarchical agglomerative clustering, we examined over 8 million records from HPMS and U.S. census data over four years. Our findings highlight the impact of socioeconomic elements—such as urbanization, income, and demographic composition—on pavement quality, beyond traffic, weather, and technical specifications. These insights emphasize the need for incorporating social equity into pavement maintenance and budgeting strategies, underscoring the significant role socioeconomic factors play in pavement performance. pdfSimulating Federated Learning with Data Augmentation for Culvert Condition Prediction in Utah: a Case Study Pouria Mohammadi and Abbas Rashidi (University of Utah) and Sadegh Asgari (Merrimack College) Tags: machine learning, project management, Python, student Abstract AbstractTransportation agencies are increasingly adopting cutting-edge solutions to enhance infrastructure management strategies. Infrastructure condition prediction models, particularly important for optimizing inspection resources, are at the forefront of this trend. 
Machine learning (ML) algorithms play a crucial role in these models, but traditional centralized ML models often struggle with data scarcity, privacy, and transferability. This paper introduces a decentralized approach using Federated Learning (FL) for infrastructure condition prediction, simulating its effectiveness with culvert inventories in Utah and five other states in the United States. We analyzed two FL models—one with data augmentation and one without—against a traditional centralized model. Our results demonstrated that FL-based models improved prediction accuracy by 30%, ensured data privacy, and reduced data transmission overheads. Moreover, due to its limited data, Utah benefited from federated insights, illustrating FL's potential to effectively enhance infrastructure management. The simulation highlights the advantages of utilizing FL in real-world scenarios. pdfAdaptive Simulation of EV Charging Processes: Employing Bayesian Inference with Markov Chain Monte Carlo for Dynamic Input Updating Yuzheng Xie, Poojitha Naidu Bandreddy, Mohammed Fawzi M. Zaylaee, and Wenying Ji (George Mason University) Tags: data driven, input modeling, Monte Carlo, behavior, Python Abstract AbstractSimulating the charging process of electric vehicles (EVs) at public stations is crucial for effective decision-making in the planning and management of EV infrastructure. Traditional models face challenges in reflecting the dynamic and uncertain nature of real-life EV charging. This study introduces a hybrid simulation framework that incorporates geospatial demand and utilizes Bayesian Inference with the Markov Chain Monte Carlo (MCMC) method to generate dynamic, probabilistic inputs. The proposed approach could (1) dynamically respond to changing observation data, (2) reflect the uncertainty and randomness of charging progress, and (3) integrate users’ demand and geospatial factors in the charging station selection. 
A case study was conducted involving three charging stations in Fairfax City. The results explained the evolving charging patterns and evaluated the impact of unforeseen events on station utilization. This method offers a robust tool for planning, developing, and optimizing public EV charging infrastructure, adapting to changing behaviors and demands. pdf Project Management and ConstructionEnhancing Learning and Collaboration in Engineering Chair: Susan R. Hunter (Purdue University)
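The dynamic input updating via Bayesian inference with MCMC described in the EV-charging abstract above can be sketched with a plain Metropolis-Hastings sampler. The exponential likelihood, flat prior, and observation values below are illustrative assumptions, not the paper's actual model.

```python
import math, random

def log_post(rate, data):
    """Log-posterior for an exponential charging-time model with a flat prior on rate > 0."""
    if rate <= 0:
        return -math.inf
    return len(data) * math.log(rate) - rate * sum(data)

def mh_sample(data, steps=2000, start=1.0, scale=0.3, seed=42):
    """Metropolis-Hastings with a Gaussian random-walk proposal."""
    rng = random.Random(seed)
    rate, samples = start, []
    for _ in range(steps):
        prop = rate + rng.gauss(0.0, scale)
        if math.log(rng.random()) < log_post(prop, data) - log_post(rate, data):
            rate = prop                          # accept the proposal
        samples.append(rate)
    return samples

obs = [0.8, 1.1, 0.9, 1.3, 1.0]                  # hypothetical charging times (hours)
post = mh_sample(obs)
mean_rate = sum(post[500:]) / len(post[500:])    # posterior mean after burn-in
```

As new observations stream in, `obs` grows and the sampler is simply re-run, which is the "dynamically respond to changing observation data" behavior the abstract describes.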
How Do Different Minds Shape Performance in Construction? An Agent-Based Modelling Approach Lynn Shehab and Farook Hamzeh (University of Alberta) Tags: agent-based, construction, AnyLogic, student Abstract AbstractIn the dynamic construction industry, project performance crucially depends on the cognitive diversity of team members. The varied cognitive abilities across teams significantly influence their adaptability and efficiency, driving the success of construction projects in unpredictable and dynamic environments. This paper employs an Agent-Based Modelling approach to explore how variations in cognitive abilities (improvisation-, collaboration-, and physiology-related) affect project outcomes. By simulating real-world construction scenarios, the model examines the effects of cognitive trait diversity on decision-making and team dynamics. This study aims to uncover the complex relationship between cognitive diversity and project performance, highlighting its impact on the efficacy of solutions and collaborative efforts. The findings provide valuable insights into optimizing team composition and enhancing decision-making processes in construction projects. Ultimately, this research advances our understanding of how cognitive factors influence project success, offering strategies to foster more resilient and effective construction teams. pdf Project Management and ConstructionSimulation for Construction Site Logistics Chair: Kamyab Aghajamali (University of New Brunswick, University of Alberta)
Optimizing the Automation in Construction Site Logistics: Problems and Proposed Model Library for Materials Flow Simulation Alexander Schlosser and Martin Barth (Friedrich-Alexander-Universität Erlangen Nürnberg), Peter Schuderer (Technische Hochschule Ingolstadt), and Joerg Franke (Friedrich-Alexander-Universität Erlangen Nürnberg) Tags: discrete-event, construction, logistics, supply chain, Siemens Tecnomatix Plant Simulation Abstract AbstractThe construction industry is currently facing challenges such as skilled labor shortages, limited automation, and inadequate digitalization, particularly for small and medium-sized enterprises (SME), amidst rising interest rates and decreasing subsidies for builders. To address these challenges, a simulation model library for construction is being established. The Plant Simulation System is used to map construction site systems with various model components, enabling digital simulation of the environment and supply chains. Customized simulation parameters are used to simulate specific construction requirements. This paper describes the use of masonry robots and transportation in wall and floor construction and their impact on site performance. The simulation model library REMUS is important for digitalizing construction sites, allowing for the assessment of innovative systems before construction and estimation of investment returns. Simulations can be used to evaluate investments and future concepts.
pdfEnhancing Safety And Efficiency In Crane Operations: Addressing Communication Challenges And Blind Lifts Kamyab Aghajamali (University of New Brunswick, University of Alberta); Rafik Lemouchi (University of Alberta, University of New Brunswick); Alireza Rahimi and Saeid Metvaei (University of New Brunswick, University of Alberta); Ahmed Bouferguene (University of Alberta); and Zhen Lei (University of New Brunswick) Tags: data driven, optimization, construction, Python, student Abstract AbstractThis study presents a methodology to improve safety and efficiency in crane operations, particularly addressing challenges in modular construction and complex lifts. It focuses on enhancing communication between crane operators and signalmen, especially in congested environments. Through a literature review, it highlights the need for innovative solutions in signalman location planning and communication methods. The proposed methodology involves preplanning and optimizing signalman visibility using a grid system to determine suitable standing locations. By assessing visibility in relation to lifting module corners, the study aims to ensure clear lines of sight for safe lifting. Validated through two case studies, the methodology demonstrates effectiveness in planning signalman locations for various lifting scenarios. This research offers a systematic approach to tackle crane operation challenges, improving safety, efficiency, and precision in construction projects. Further development and implementation of such methodologies are vital for the success of modular construction and heavy industrial projects. pdf Project Management and ConstructionSimulation for Project Planning and Controls Chair: He Wen (University of Alberta)
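The grid-based visibility check behind the signalman-placement method in the crane-operations abstract above can be sketched as a simple line-of-sight test: candidate standing cells are kept only if the segment to a lifting-module corner crosses no obstacle cell. The site layout below is invented for the sketch.

```python
def line_of_sight(a, b, blocked):
    """Sample points along segment a->b; reject if any falls in a blocked grid cell."""
    (x0, y0), (x1, y1) = a, b
    steps = max(abs(x1 - x0), abs(y1 - y0)) * 4 or 1   # oversample the segment
    for i in range(steps + 1):
        t = i / steps
        cell = (round(x0 + t * (x1 - x0)), round(y0 + t * (y1 - y0)))
        if cell in blocked and cell not in (a, b):
            return False
    return True

blocked = {(2, 2)}                       # e.g. a stored module obstructing sight
candidates = [(0, 0), (0, 4)]            # possible signalman standing cells
module_corner = (4, 4)                   # corner of the module being lifted
visible = [c for c in candidates if line_of_sight(c, module_corner, blocked)]
```

In a fuller version, each candidate cell would be scored against every module corner over the whole lift path, and only cells with unobstructed views throughout would be proposed to the planner.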
A Framework of Project Risk Simulation with Event Knowledge Graph He Wen, Simaan AbouRizk, and Yasser Mohamed (University of Alberta) and Rongbing Huang (York University) Tags: data driven, discrete-event, construction, project management, Python Abstract AbstractRisk simulation is crucial for effective project management, yet conventional methods often fail to capture the complex interdependencies and interactions among risk events. This paper proposes a novel approach to project risk simulation by integrating event knowledge graphs, fuzzy logic techniques, and game theory principles. Event knowledge graphs provide a structured representation of project events and their relationships, thereby facilitating the simulation of risk events and pathways; fuzzy logic enables the assessment of uncertain events; and game theory aids in identifying high-risk events and elucidating risk pathways. A methodology is outlined encompassing the construction of event repositories, establishment of event knowledge graphs, and simulation of project risks. Finally, a case study of wind farm projects demonstrates the practical application of the proposed approach, highlighting its effectiveness in simulating and analyzing project risks. pdfA Federated Simulation-Based Framework for Enhanced Construction Project Planning and Control Rana Ead, Stephen Hague, Yasser Mohamed, and Simaan AbouRizk (University of Alberta) Tags: distributed, conceptual modeling, construction, project management, student Abstract AbstractThis study addresses the need for integrated planning and control in construction projects by proposing a novel federated simulation framework. Despite the proven effectiveness of simulation in project planning and control, its adoption is hindered by complexity and limited reusability. High-Level Architecture (HLA) standards were explored to mitigate these challenges, yet they also pose complexities of their own.
The proposed framework integrates simulation and HLA advantages within a single environment, offering a standard object model for mapping data, a decomposable simulation component that supports plug-and-play flexibility, and a data acquisition and integration component that supports real-time dynamic updates and scenario analysis. A prototype federation demonstrates initial functionality, showcasing the framework's potential to streamline construction processes and enhance decision-making. By leveraging simulation and HLA synergies, our framework represents a promising approach to achieving integrated construction planning and control throughout project lifecycles. pdf
Reliability Modeling and Simulation Track Coordinator - Reliability Modeling and Simulation: Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, University of Southern Denmark), Xueping Li (University of Tennessee), Femi Omitaomu (Oak Ridge National Laboratory) Reliability Modeling and SimulationDecision-Making in Reliability Management Chair: Michelle Jungmann (Karlsruhe Institute of Technology)
Closing the Service: Contrasting activity-based and time-based systematic closure policies Antonio Castellanos (University of Chicago), Andrew Daw (University of Southern California), Amy Ward (University of Chicago), and Galit B. Yom-Tov (Technion – Israel Institute of Technology) Tags: data driven, discrete-event, Monte Carlo, optimization, behavior Abstract AbstractWe examine different policies for systematic service closure in messaging service systems. The system is modeled as an M/UHP/1 queue, where service times follow a history-based Hawkes cluster process. We propose and examine stopping-time rules that balance between queue length and the probability of prematurely closing conversations. In a simulation study, we compare two families of systematic closure policies: the first relies on predictive information regarding service progress, i.e., the conversation's activity levels, while the second relies on elapsed time without activity. When restricted to static threshold policies, both families provide similar performance. However, when allowing the threshold to vary with the system state, activity-level policies outperform the inactive-time policies. Moreover, a large difference is observed between static and dynamic threshold policies. We therefore conclude that state-dependent (i.e., dynamic) activity-based policy is the most promising candidate to achieve optimal closure rules. pdfInferring reliability model parameters from expert opinion James Nutaro (Oak Ridge National Laboratory) Tags: distribution-fitting, estimation, government Abstract AbstractWe propose a method for constructing bathtub models of reliability from opinion. The method is intended for reliability studies early in the design and prototyping of a new system, before data concerning reliability has become available. A stylized bathtub curve is presented for soliciting best engineering judgement from technical experts. 
By pooling these stylized curves, we produce data that can be used to infer parameters for a piece-wise Weibull model of reliability. A numerical example demonstrates the practicality of the method while also highlighting potential pitfalls when working with subjective data. pdfFusing Expert Knowledge and Data for Simulation Model Discovery in Digital Twins: A Case Study from Reliability Modeling Michelle Jungmann (Karlsruhe Institute of Technology) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, University of Southern Denmark) Tags: discrete-event, Petri Nets, process mining, digital twin, Python Abstract AbstractIntegrating expert knowledge in data-driven Digital Twins can lead to better-informed underlying models. Achieving systematic integration, however, remains a complex challenge. In this study, we propose an initial approach for hybrid model extraction by systematically fusing expert knowledge statements with Internet of Things data from manufacturing systems, such as event and state logs. We outline two main strategies to facilitate the fusion of data and expert knowledge in a systematic way. We, furthermore, present a case study in reliability assessment of manufacturing systems showcasing our methodology within this specific domain. Using our four fusion algorithms, we automatically extract reliability models from both data and expert knowledge statements. Finally, we conduct a comprehensive analysis of the results and draw conclusions regarding the efficacy of the fusion algorithms for Digital Twin model extraction. pdf Simulation Around the World, Reliability Modeling and SimulationApplications Chair: Antuela Tako (Nottingham Trent University)
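A piece-wise Weibull bathtub model of the kind the expert-opinion abstract above infers can be sketched as three Weibull hazard phases: decreasing (infant mortality), constant (useful life), and increasing (wear-out). The breakpoints, shapes, and scales below are illustrative stand-ins, not values inferred from pooled expert curves.

```python
def weibull_hazard(t, shape, scale):
    """Weibull hazard rate h(t) = (k/s) * (t/s)^(k-1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t, phases=((0.0, 0.5, 2.0),   # (start, shape, scale): decreasing
                              (1.0, 1.0, 5.0),   # shape 1 -> constant hazard
                              (8.0, 3.0, 4.0))): # increasing (wear-out)
    """Hazard of the phase whose start time is the latest one not exceeding t."""
    start, shape, scale = max((p for p in phases if p[0] <= t), key=lambda p: p[0])
    return weibull_hazard(t - start + 1e-9, shape, scale)   # local time within phase
```

During useful life the hazard is flat at shape/scale... here 1/5 = 0.2 per unit time; before it the hazard falls, after it the hazard rises, reproducing the stylized bathtub curve used to solicit expert judgement.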
Modeling and Simulation of Battery Recharging for UAV Applications: Smart Farming, Disaster Recovery, and Dengue Focus Detections Leonardo Grando and Juan Fernando Galindo Jaramillo (Campinas State University); Jose Roberto Emiliano Leite (Campinas State University, Fundação Hermínio Ometto); and Edson Luiz Ursini (Campinas State University) Abstract AbstractThe applications of Unmanned Aerial Vehicles (UAVs), or drones, have been increasing in areas such as Smart Farming, Disaster Recovery, and combating mosquito-borne tropical diseases such as Dengue. Because battery life is short, at most 20 to 30 minutes in some cases, most UAVs lack the capacity to carry out extended missions. This work presents two contributions: i) a description of the characteristics observed in three drone applications (agricultural, disaster, and against dengue disease), and ii) the creation of an Agent-Based Simulation Model considering energy supply simulation. This model considers that the agents will not collude about their recharging decisions. pdfA Discrete-event Simulation Model for Terminal Capacity Planning in an Indonesian Container Port Alicia Bernadine Pandy and Niniet Indah Arvitrida (Institut Teknologi Sepuluh Nopember) Abstract AbstractThis study examines a case study of a container terminal in Surabaya, Indonesia. There are three types of equipment, which are container cranes, RTG cranes, and head trucks. Despite the availability of these resources, their utilization is still low. To address this issue, given the system’s inherent complexity and the interdependencies among its components, a discrete-event simulation modeling approach is used. The simulation model explores potential scenarios for improvement and evaluates the best alternatives for improving equipment utilization while taking into account the quantity of ships and containers processed.
The experiments also consider the combination of equipment assignments during the loading-unloading activities. The results indicate that gains in equipment utilization are not strictly proportional to increases in the number of containers and ships; in particular, such increases do not translate into a linear correlation with utilization rates. pdfUnderstanding Energy Consumption Trends in High Performance Computing Nodes Jonathan Muraña (UDELAR, Uruguay); Juan J. Durillo (Leibniz-Rechenzentrum); and Sergio Nesmachnow (UDELAR, Uruguay) Tags: estimation, environment Abstract AbstractThis article presents a study of energy consumption behavior in high performance computing nodes in relation to the usage of computing resources. Linear models are constructed for identifying common patterns or differences in energy consumption across different architectures. The study is significant as it provides insights into the energy consumption of computing nodes, helping to build simple, yet useful and transparent models. Moreover, the methodology provides building blocks for building new models with high predictive quality and broad applicability across different architectures. The results reveal similarities in energy consumption across different architectures when compared in terms of CPU cycles and cache misses. Additionally, employing linear models based on CPU cycles and cache misses allows for both the explanation of energy consumption behavior and the achievement of reasonable predictive quality. Overall, a partition-based linear model outperforms a global linear and a mean partition-based model by up to 5.7%. pdf
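A linear energy model of the kind described in the HPC abstract above, energy as a linear function of CPU cycles, can be fitted with ordinary least squares in a few lines. The data points below are synthetic and exactly linear, purely for illustration; they are not measurements from the paper.

```python
def ols_fit(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

cycles = [1.0, 2.0, 3.0, 4.0]            # e.g. units of 10^9 CPU cycles
energy = [52.0, 104.0, 156.0, 208.0]     # joules (synthetic, exactly linear)
b, a = ols_fit(cycles, energy)
predict = lambda x: a + b * x            # transparent, interpretable model
```

A partition-based variant, as the abstract suggests, would fit one such model per architecture or workload partition rather than one global model.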
Scientific AI & Applications Scientific AI & ApplicationsScientific Applications Chair: Rafael Mayo-García (CIEMAT)
Using Simulation to Address Inequality and Variability in Elections Brock Spence, Alexander Michel Cañedo, and Dima Nazzal (Georgia Institute of Technology) Tags: optimization, complex systems, Simio, student Abstract AbstractIn 2022, the State of Georgia experienced significant voter wait-time issues throughout the election season. Since then, Georgia has been front and center in the national conversation about election integrity, as legislators try to adapt voting systems to ever-changing technologies and cultural norms. Georgia law generally allocates election resources based on the number of registered voters a precinct serves, but size is only one differentiating factor among precincts. Voters across the state behave differently, so different precincts of similar sizes might require different resources depending on when voters arrive. Using simulation optimization based on voter arrival data obtained from the Georgia Secretary of State, we have found more efficient and equitable allocation methods, compared to simply considering registered voters, including simple heuristics that account for common voter arrival patterns. By considering the variation between precincts, election administrators can better safeguard the integrity and equality of elections without sacrificing efficiency. pdfSmart Management of Dairy Farms Based on Simulation Osvaldo Palma (Universidad Nacional Andrés Bello, Universidad de Lleida); Lluis Plà (Universidad de Lleida); Alejandro Mac Cawley (Pontificia Universidad Católica de Chile); and Víctor Albornoz (Universidad Técnica Federico Santa María) Tags: discrete-event, Monte Carlo, optimization, random variate generation, student Abstract AbstractMilk and beef production is crucial to ensuring a cost-effective and sustainable food supply. As the demand for agricultural products increases, making informed decisions is critical. 
Discrete event simulation, a suitable tool for modeling complex systems with random variability, offers substantial advantages compared to traditional analytical models. In this study, we developed a discrete event simulation model for a dairy herd. The model can be useful for studying various culling strategies based on disease or reproductive performance. We validated the model by comparing it with analytical results from a Markov chain model and published literature. Our findings demonstrate that twin calving does not significantly affect herd performance, contrary to what some authors have claimed. We advocate the use of discrete event simulation in smart management tools, emphasizing its usefulness in decision-making on dairy farms. In future research, we will explore additional factors such as abortions, mortality, and variable time passages. pdfAdvancing Neutron Safety and Dosimetry in Nuclear Facilities: Applications and Current Status of the Development of NEREIDA Osiris Núñez-Chongo (CIEMAT); Mauricio Suárez-Durán (Universidad de la Costa); Hernán Asorey (CNEA); Iván Sidelnik (CNEA, Centro Atómico Bariloche); Rafael Mayo-García and Roberto Méndez (CIEMAT); and Manuel Carretero (Universidad Carlos III de Madrid) Abstract AbstractThe development of nuclear facilities necessitates reliable tools for design, licensing, and safety assessments. NEREIDA (the Spanish acronym for fast neutrons for the exploitation of facilities with atomic devices) is being developed to provide a robust solution. Utilizing Geant4-based Monte Carlo methods, NEREIDA characterizes neutron flux spectra and calculates dosimetric quantities such as environmental and personal dose equivalents in any neutron-production facility. This innovative tool integrates state-of-the-art codes for neutron transport, interaction, moderation, and material activation.
Our preliminary results, obtained after the first year of a planned three-year period, demonstrate computational scalability across various computing facilities. NEREIDA will generate Geant4 models of facilities from CAD files, structural materials, and neutron source characteristics without requiring programming expertise. Additionally, NEREIDA models the impact of the cosmic-ray background and includes anthropomorphic phantoms for safety assessments. This work discusses NEREIDA's current status and future directions, emphasizing its critical role in enhancing nuclear facility safety. pdf Scientific AI & ApplicationsSimulation Techniques for Science and Engineering Chair: Rafael Mayo-García (CIEMAT)
Analysis of the SVD Scaling on Large Sparse Matrices María de Castro-Sánchez (Aerospace Engineering School), José A. Moríñigo (CIEMAT), Filippo Terragni (UC3M), and Rafael Mayo-García (CIEMAT) Abstract AbstractThere has been great interest in the Singular Value Decomposition (SVD) algorithm over the last years because of its wide applicability in multiple fields of science and engineering, both standalone and as part of other computing methods. The advent of the exascale era with massively parallel computers brings incredible possibilities to deal with very large amounts of data, often stored in a matrix. These advances set the focus on developing better scaling parallel algorithms: e.g., an improved SVD to efficiently factorize a matrix. This study assesses the strong scaling of four SVDs of the SLEPc library, plugged into the PETSc framework to extend its capabilities, via a performance analysis on a population of sparse matrices with up to 10^9 degrees of freedom. Among them, there is a randomized SVD with promising performance at scale, a key aspect in solvers for exascale simulations since communication must be minimized for scalability success. pdfBroadening Access to Simulations for End-users via Large Language Models: Challenges and Opportunities Philippe Giabbanelli and Jose J. Padilla (Old Dominion University) and Ameeta Agrawal (Portland State University) Abstract AbstractLarge Language Models (LLMs) are becoming ubiquitous to create intelligent virtual assistants that assist users in interacting with a system, as exemplified in marketing. Although LLMs have been discussed in Modeling & Simulation (M&S), the community has focused on generating code or explaining results. We examine the possibility of using LLMs to broaden access to simulations, by enabling non-simulation end-users to ask what-if questions in everyday language. 
Specifically, we discuss the opportunities and challenges in designing such an end-to-end system, divided into three broad phases. First, assuming the general case in which several simulation models are available, textual queries are mapped to the most relevant model. Second, if a mapping cannot be found, the query can be automatically reformulated and clarifying questions can be generated. Finally, simulation results are produced and contextualized for decision-making. Our vision for such system articulates long-term research opportunities spanning M&S, LLMs, information retrieval, and ethics. pdf
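The first phase of the LLM-assisted pipeline discussed above, routing a plain-language what-if question to the most relevant simulation model, can be illustrated with a toy registry. A real system would use an LLM or embedding-based retrieval; simple keyword overlap stands in here, and the model names and keywords are invented for the sketch.

```python
MODELS = {   # hypothetical model registry: name -> descriptive keywords
    "airport_ops": {"aircraft", "turnaround", "gate", "runway"},
    "er_flow": {"patient", "triage", "ward", "ambulance"},
}

def route_query(query):
    """Map a textual query to the best-matching model, or None if nothing matches."""
    words = set(query.lower().split())
    scores = {name: len(words & kw) for name, kw in MODELS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None   # None -> generate a clarifying question

choice = route_query("What if one more ambulance serves the triage queue?")
```

The `None` branch corresponds to the second phase in the abstract: when no mapping is found, the query is reformulated or a clarifying question is posed to the end-user.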
Simulation and Artificial Intelligence Track Coordinator - Simulation and Artificial Intelligence: Warren Lamar Harrell (MITRE Corp.), Edward Y. Hua (MITRE Corporation), Yijie Peng (Peking University) Simulation and Artificial IntelligenceEstimation Chair: Jong Hun Woo (Korea Maritime & Ocean University)
A New Approach to Sensitivity Analysis Based on Dirac Delta Family Methods Zhenyu Cui (Stevens Institute of Technology), Kailin Ding (Nanjing University of Science and Technology), Yanchu Liu (Sun Yat-sen University), and Lingjiong Zhu (Florida State University) Tags: estimation, Monte Carlo, variance reduction Abstract AbstractIn this paper, we propose a new approach to sensitivity analysis by utilizing the Dirac Delta family method. In a novel way, we combine it with the classical infinitesimal perturbation analysis (IPA) estimator, and propose a new class of Dirac-Delta based sensitivity estimators, which we name as the Delta-Family IPA estimators. We establish an explicitly computable error bound for the Delta-Family IPA estimators, which bypasses the usual technical assumption of interchangeability of limit and differentiation as in the literature of IPA stochastic derivatives estimators. Numerical examples of Greeks computations in the case of European call options and Asian digital options illustrate the improved efficiency of the proposed method as compared to the IPA method. pdfA Deep Learning Approach for Rare Event Simulation in Diffusion Processes Henrik Hult (KTH Sweden), Aastha Jain and Sandeep Juneja (Ashoka University), Pierre Nyquist (Chalmers University of Technology), and Sushant Vijayan (TIFR) Tags: machine learning, Monte Carlo, variance reduction, rare events, Python Abstract AbstractWe address the challenge of estimating rare events associated with stochastic differential equations using importance sampling. The importance sampling zero variance measure in these settings can be inferred from a solution to the Hamilton-Jacobi-Bellman partial differential equation (HJB-PDE) associated with a value function for the underlying process. Guided by this equation, we use a neural network to learn the zero variance change of measure. To improve performance of our estimation, we pursue two new ideas. 
First, we adopt a loss function that combines three objectives which collectively contribute to improving the performance of our estimator. Second, we embed our rare event problem into a sequence of problems with increasing rarity. We find that a well-chosen schedule of rarity increase substantially speeds up rare event simulation. Our approach is illustrated on Brownian motion, the Ornstein-Uhlenbeck (OU) process, and the CIR process, as well as Langevin double-well diffusion. pdfGANCQR: Estimating Prediction Intervals for Individual Treatment Effects with GANs Jiaxing Wang and Hong Wan (North Carolina State University) and Xi Chen (Virginia Tech) Abstract AbstractEvaluating individual treatment effects (ITE) is challenging due to the lack of access to counterfactual outcomes, particularly when working with biased data. Recent efforts have focused on leveraging the generative capabilities of models like Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) for ITE estimation. However, few approaches effectively address the need for uncertainty quantification in these estimates. In this work, we introduce GANCQR, a GAN-based conformal prediction method that generates prediction intervals for ITE with reliable coverage. Numerical experiments on synthetic and semi-synthetic datasets demonstrate GANCQR’s superiority in handling selection bias compared to state-of-the-art methods. pdf Simulation and Artificial IntelligenceDeep Reinforcement Learning Chair: Bulent Soykan (University of Central Florida)
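The classical IPA estimator that the sensitivity-analysis abstract above builds on admits a compact Monte Carlo sketch. The example below, with illustrative parameter values of our own choosing (not from the paper), estimates the Black-Scholes delta of a European call pathwise and checks it against the closed form:

```python
import math
import random

def ipa_call_delta(s0, k, r, sigma, t, n_paths, seed=0):
    """Pathwise (IPA) Monte Carlo estimate of a European call delta.

    Along each path S_T is linear in S0, so dPayoff/dS0 =
    e^{-rT} * 1{S_T > K} * S_T / S0; interchanging derivative and
    expectation is justified here because the payoff is Lipschitz in S0.
    """
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        s_t = s0 * math.exp((r - 0.5 * sigma ** 2) * t + sigma * math.sqrt(t) * z)
        if s_t > k:
            acc += s_t / s0
    return math.exp(-r * t) * acc / n_paths

def bs_call_delta(s0, k, r, sigma, t):
    """Closed-form Black-Scholes delta Phi(d1), used only as a sanity check."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    return 0.5 * (1.0 + math.erf(d1 / math.sqrt(2.0)))

est = ipa_call_delta(100.0, 100.0, 0.05, 0.2, 1.0, 200_000)
exact = bs_call_delta(100.0, 100.0, 0.05, 0.2, 1.0)
```

With enough paths the pathwise estimate agrees with the closed form to within Monte Carlo error; the Delta-Family construction in the abstract targets settings where the interchange assumption used here is harder to justify.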
Deep Reinforcement Learning for Setup and Tardiness Minimization in Parallel Machine Scheduling Sohyun Nam, Jiwon Baek, and Young-In Cho (Seoul National University) and Jong Hun Woo (Seoul National University, Research Institute of Marine Systems Engineering) Tags: neural networks, optimization, digital twin, manufacturing, Python Abstract AbstractThis paper introduces a novel deep reinforcement learning algorithm for the identical parallel machine scheduling problem, with a focus on minimizing setup time and tardiness. In modern manufacturing environments, accommodating diverse consumer demands has emerged as a primary challenge, shifting focus towards small-batch, multi-product production schemes. However, traditional optimization techniques struggle with the inherent complexity and uncertainty of production environments. Recently, reinforcement learning has emerged as a promising alternative for scheduling in such dynamic environments. We formulate the scheduling problem as a Markov decision process, with states designed to capture the key performance indicators related to setup and tardiness. The algorithm's actions correspond to selecting one heuristic rule among SSPT, ATCS, MDD, and COVERT. During the learning phase, we employ the Proximal Policy Optimization (PPO) algorithm. Experimental results on a custom dataset demonstrate superior performance compared to individual applications of existing heuristics. pdfDistortion Risk Measure-based Deep Reinforcement Learning Jinyang Jiang (Peking University), Bernd Heidergott (Vrije Universiteit Amsterdam), Jiaqiao Hu (Stony Brook University), and Yijie Peng (Peking University) Abstract AbstractMainstream reinforcement learning (RL) typically focuses on maximizing expected cumulative rewards. In this paper, we explore a risk-sensitive RL setting where the objective is to optimize the distortion risk measure (DRM), a criterion better reflecting human risk perception.
We parameterize the action selection policy by neural networks and propose a novel policy gradient algorithm, DRM-based Policy Optimization (DPO), along with its accelerated variant, DRM-based Proximal Policy Optimization (DPPO), to address deep RL tasks with DRM objectives. DPO integrates three coupled recursions operating at different timescales to estimate gradient components and update parameters simultaneously. Our experiments provide numerical results across diverse scenarios, demonstrating that our proposed algorithms outperform the existing baselines under the DRM criterion. pdfOptimizing Job Shop Scheduling Problem Through Deep Reinforcement Learning and Discrete Event Simulation Bulent Soykan and Ghaith Rabadi (University of Central Florida) Tags: discrete-event, machine learning, neural networks, digital twin, Python Abstract AbstractThis paper explores the optimization of the Job Shop Scheduling Problem (JSSP) by employing a Deep Reinforcement Learning (DRL) approach that learns optimal scheduling policies through continuous interaction with a job shop setting simulated within a discrete event simulation (DES) environment. The study involves computational experiments to train and evaluate the DRL agent across benchmark JSSP instances. We compare our DRL-based scheduling solutions against those generated by widely used priority dispatching rules, assessing the impact of the learned policies on performance metrics including the total time to complete all jobs (makespan) and efficiency in using machines (machine utilization). Our results indicate improvements in scheduling efficiency, showcasing the DRL algorithm's ability to adapt and optimize in complex scheduling scenarios. This paper underscores the potential of integrating DRL with DES to create a powerful toolset for modern manufacturing challenges, enabling businesses to maintain competitive advantage through improved operational agility.
pdf Simulation and Artificial IntelligenceAI Application Chair: Jan Dünnweber (OTH Regensburg)
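The heuristic dispatching rules used as actions and baselines in the scheduling abstracts above reduce, in their simplest form, to ordering the job list before greedy machine assignment. The toy instance below is our own and is far simpler than the papers' environments (no setups, no due dates); it compares FIFO and SPT orderings on identical parallel machines, where SPT provably minimizes total completion time:

```python
import heapq
import random

def total_completion_time(jobs, m):
    """Greedy list scheduling on m identical machines: each job in list order
    goes to the machine that frees up first; returns the sum of job
    completion times (total flow time)."""
    free_at = [0.0] * m           # min-heap of machine-available times
    heapq.heapify(free_at)
    total = 0.0
    for p in jobs:
        start = heapq.heappop(free_at)
        finish = start + p
        total += finish
        heapq.heappush(free_at, finish)
    return total

rng = random.Random(42)
jobs = [rng.uniform(1.0, 10.0) for _ in range(50)]
fifo = total_completion_time(jobs, 3)          # jobs in arrival order
spt = total_completion_time(sorted(jobs), 3)   # shortest processing time first
```

In the DRL formulations above, the agent's value comes from switching among such rules state by state rather than committing to one ordering for the whole horizon.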
AI-driven Multi-objective UAV Route Optimization Sahil Delsare (Northeastern University, Walmart); Ashwin Devanga (Northeastern University, MathWorks); and Dehghani Mohammad (Northeastern University) Abstract AbstractUnmanned Aerial Vehicles have proven to enhance customer service and increase efficiency in supply chain management. They offer greater flexibility and ease of operation, and bypass traffic congestion by flying directly between nodes. This paper presents an innovative version of the Team Orienteering Problem with Time Windows and Charging Stations. The proposed model integrates various optimization approaches, including heuristics and AI-driven methods. The primary objectives are to maximize service rewards, minimize total travel distance, and mitigate out-of-charge incidents. Experiments are conducted to demonstrate the competency of the applied AI-enabled approach in various scenarios. pdfOptimizing Smart Retail by Experiment Using an Online AI Model Exploration Interface Wenfei Huang, Matthias Melzer, and Jan Dünnweber (OTH Regensburg) Tags: big data, machine learning, logistics, Python Abstract AbstractSmart retail technologies save grocery store operators a lot of work. At the same time, these technologies produce valuable data for building sustainable and economical inventory management strategies. AI models can be trained for sales forecasting using the data. The forecasts support the provisioning of fresh food over the whole week and help reduce food waste. In this paper, we present a Web portal we developed that allows grocery store operators to experiment with AI models, revealing interrelations between observed and anticipated customer behavior. Clickable diagrams facilitate the exploration of data sets combining historical data and synthetically generated data. Pricing and ordering can then be adapted according to the simulated forecasts.
By means of a case study, we show that our simulations are not only useful for predicting future sales but also for other smart retail tasks. pdfModeling of Agent Decisions using Conditional Generative Adversarial Networks Martin Bicher, Dominik Brunmeir, and Niki Popper (Technische Universität Wien) Tags: agent-based, metamodeling, neural networks, COVID, Java Abstract AbstractIn this paper, we investigate the use of Generative Adversarial Networks (GAN) to model agent behavior in agent-based models. We focus on use cases in which an agent's decision-making process can only be modeled from data and is infeasible to model causally. In these situations, meta-models are often the only way to quantitatively parameterize the agent-based model. However, methods that capture not only deterministic relationships but also stochastic uncertainty are rare. Since GANs are well known for their ability to generate pseudo-random numbers from complex distributions, we explore the pros and cons of this strategy for modeling a delay process in a large-scale agent-based SARS-CoV-2 simulation model. pdf Simulation and Artificial IntelligenceNeural Networks Chair: Spyros Garyfallos (UPC, Amazon Web Services)
Service Level Prediction In Non-Markovian Nonstationary Queues: A Simulation-Based Deep Learning Approach Spyros Garyfallos (Universitat Politècnica de Catalunya, Amazon Web Services); Yunan Liu (North Carolina State University); and Pere Barlet-Ros and Alberto Cabellos-Aparicio (Universitat Politècnica de Catalunya) Tags: data driven, machine learning, Monte Carlo, neural networks, student Abstract AbstractEmpirical studies have shown that real-life queueing systems, such as contact centers, exhibit non-Markovian and nonstationary behaviors. Consequently, analyzing their performance poses significant challenges. In this paper, we propose a simulation-based autoregressive deep learning algorithm (SADLA) for predicting service levels in non-Markovian, nonstationary queueing systems. Our method leverages modern recurrent neural networks, which are trained on synthetic data to capture the intrinsic spatio-temporal characteristics of queueing systems. Our findings demonstrate that SADLA achieves high prediction accuracy while reducing computational complexity by six orders of magnitude compared to traditional simulation methods. The implications of our research extend beyond accurate queue performance analysis; by embracing the learning capabilities of neural networks, our approach contributes to the advancement of the overall performance and resilience of real-life service systems. pdfLearning Payment-Free Resource Allocation Mechanisms Sihan Zeng, Sujay Bhatt, and Eleonora Kreacic (JPMorgan AI Research); Parisa Hassanzadeh (Samsung SDS); and Alec Koppel and Sumitra Ganesh (JPMorgan AI Research) Tags: data driven, machine learning, neural networks, optimization, industry Abstract AbstractWe consider the design of mechanisms that allocate limited resources among self-interested agents using neural networks. 
Unlike recent works that leverage machine learning for revenue maximization in auctions, we consider welfare maximization as the key objective in the payment-free setting. Without payment exchange, it is unclear how we can align agents' incentives to achieve the desired objectives of truthfulness and social welfare simultaneously, without resorting to approximations. Our work makes novel contributions by designing an approximate mechanism that desirably trades off social welfare against truthfulness. Specifically, (i) we contribute a new end-to-end neural network architecture, ExS-Net, that accommodates the idea of "money-burning" for mechanism design without payments; (ii) we provide a generalization bound that guarantees the mechanism performance when trained under finite samples; and (iii) we provide an experimental demonstration of the merits of the proposed mechanism. pdfArtificial Intelligence and Simulation for Enhanced Pilot Training Larry Lowe, Luis Rabelo, Marwen Elkamel, Mitchell Hunsucker, Katalina Arias Marin, Nathalia Davila, Omar Allaz, Mario Marin, and Gene Lee (University of Central Florida) Tags: agent-based, machine learning, military, AnyLogic, student Abstract AbstractThis paper presents an innovative framework that integrates Virtual Constructive (VC) simulation and Convolutional Neural Networks (CNN) into an Agent-Based Model (ABM) to enhance pilot performance. By leveraging the strengths of VC for immersive training scenarios and CNN for advanced image recognition and decision-making processes, the research aims to provide a comprehensive understanding of how artificial intelligence and machine learning can revolutionize pilot training programs. Through a series of experiments within the ABM, the paper demonstrates the potential of this integrated approach to significantly improve decision accuracy and response times under simulated operational conditions.
The findings highlight the effectiveness of integrating VC and CNN into an ABM for training simulations and can pave the way for developing complex environments for training and capturing operator behavior. pdf Simulation and Artificial IntelligenceNovel Methodology Chair: Ruihan Zhou (China)
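Synthetic training data of the kind the SADLA abstract above relies on can be generated with a few lines of simulation. The sketch below is a minimal stand-in of our own devising, not the paper's generator: a single-server queue with a sinusoidal daily arrival-rate profile, reporting hourly service levels. Drawing each interarrival time at the instantaneous rate is a simplification of an exact nonhomogeneous Poisson process.

```python
import math
import random

def hourly_service_levels(hours=24, base_rate=5.0, amp=3.0,
                          mean_service=0.15, sla_wait=0.1, seed=1):
    """Single-server FIFO queue with a time-varying arrival rate.

    Waits follow the Lindley recursion w <- max(0, w + s - a).  Returns,
    per hour, the fraction of arrivals that waited less than sla_wait
    (1.0 for hours with no arrivals).
    """
    rng = random.Random(seed)
    met = [0] * hours
    cnt = [0] * hours
    t = w = s_prev = 0.0
    while True:
        lam = base_rate + amp * math.sin(2.0 * math.pi * t / 24.0)
        a = rng.expovariate(lam)        # interarrival at the current rate
        w = max(0.0, w + s_prev - a)    # Lindley recursion
        t += a
        if t >= hours:
            break
        h = int(t)
        cnt[h] += 1
        met[h] += w < sla_wait
        s_prev = rng.expovariate(1.0 / mean_service)
    return [m / c if c else 1.0 for m, c in zip(met, cnt)]

levels = hourly_service_levels()
```

Pairing many such simulated trajectories with their arrival-rate profiles yields the kind of supervised dataset a recurrent predictor can be trained on.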
SimDiff: Modeling and Generation of Stochastic Discrete-Event Simulation Inputs via Denoising Probabilistic Diffusion Process Fengwei Jia and Hongli Zhu (Tsinghua Shenzhen International Graduate School); Fengyuan Jia (Anhui Province Key Laboratory of Special Heavy Load Robot, Anhui University of Technology); Xinyue Ren, Siqi Chen, and Hongming Tan (Tsinghua Shenzhen International Graduate School); and Wai Kin Victor Chan (Tsinghua Shenzhen International Graduate School, International Science and Technology Information Center) Tags: data driven, discrete-event, machine learning, neural networks Abstract AbstractThis paper introduces Simulation using Diffusion process (SimDiff), a novel framework for automated modeling and generation of stochastic discrete-event simulation (DES) input distributions, addressing the high entry barrier and challenges associated with obtaining accurate input data. Traditional DES models often rely on simplifying assumptions, such as Poisson and Exponential distributions, which may not fully capture the complexity of real-world systems. We propose SimDiff to overcome these limitations by utilizing the denoising probabilistic diffusion model, a generative neural network capable of learning complex statistical distributions and efficiently sampling from them. Additionally, we introduce SimDiff-ConvTrans, an extension that incorporates Transformer and Convolution components for simulating non-i.i.d inputs. Our experiments demonstrate the effectiveness of SimDiff in handling simple and hybrid data distributions, as well as empirical datasets. SimDiff represents a significant advancement in simplifying the stochastic simulation process, making it more accessible and efficient for users across various expertise levels. 
pdfValidation towards Realistic Synthetic Datasets in Production Planning Jan Michael Spoor (Karlsruhe Institute of Technology), Marvin Matthes and Martin Krockert (University of Applied Sciences Dresden), and Jens Weber (Baden-Wuerttemberg Cooperative State University Lörrach) Tags: machine learning, neural networks, manufacturing, student Abstract AbstractFor large-scale simulations, a sufficient amount of data is required. Despite increasing data availability, it is still challenging to gather large-scale datasets, which are comprehensive, correct, accessible, and realistic, to validate new algorithms and models. An alternative is the use of synthetic data. Thus, we propose a novel methodology to generate realistic datasets. Based upon the statistical properties of real-world data, synthetic datasets are generated by ML models and filtered for anomalous values. The generated datasets are then compared to find the most suitable one. For this validation procedure, a modified Hopfield neural network model is extended to enable an analysis of sequences and to derive a comparison metric. The method demonstrates its applicability by providing an in-depth comparison of all tested data generators using a real-world dataset of a mid-size manufacturing company, whereby transformer-based generators proved most suitable. More diverse use cases should be evaluated in future research. pdfFixed-precision Ranking and Selection as Markov Decision Process Ruihan Zhou and Yijie Peng (Peking University) Tags: machine learning, ranking and selection, Python Abstract AbstractIn this study, we conceptualize the fixed-precision ranking and selection (R&S) problem as a stochastic control problem and subsequently model it using a Markov Decision Process (MDP). Our approach aims to study the fixed-precision paradigm of R&S within a stochastic dynamic programming framework.
To address the fixed-precision R&S challenge, we employ AlphaRank, an innovative artificial intelligence technique initially developed by Zhou et al. (2024) for tackling fixed-budget R&S problems. This procedure intelligently handles learning and decision-making through deep reinforcement learning, thereby addressing the R&S problem where the mean differences between various alternatives tend to approach zero. We use a numerical example to illustrate the efficacy of AlphaRank in solving fixed-precision R&S problems. Notably, this method mitigates, to a certain extent, the issues faced by traditional fixed-precision procedures, which often require excessive sampling to reach a specified accuracy level. pdf Simulation and Artificial IntelligenceGenerative AI and Generative Agents Chair: Joy Datt (MITRE Corporation)
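For contrast with the learned AlphaRank policy described above, a textbook-style fixed-precision selection loop can be sketched directly; the means, variance, and stopping margin below are illustrative choices of our own, not from the paper:

```python
import random
import statistics

def select_best(means, sigma=1.0, n0=10, seed=7, max_rounds=100_000):
    """Sample every alternative in rounds; stop once the sample-best exceeds
    each competitor by a conservative simultaneous confidence margin
    (3 sigma for the difference of two sample means).  Returns the chosen
    index and the number of samples spent per alternative."""
    rng = random.Random(seed)
    k = len(means)
    data = [[rng.gauss(means[i], sigma) for _ in range(n0)] for i in range(k)]
    for _ in range(max_rounds):
        xbar = [statistics.fmean(d) for d in data]
        n = len(data[0])
        margin = 3.0 * sigma * (2.0 / n) ** 0.5
        best = max(range(k), key=xbar.__getitem__)
        if all(xbar[best] - xbar[j] > margin for j in range(k) if j != best):
            return best, n
        for i in range(k):          # one more sample of each alternative
            data[i].append(rng.gauss(means[i], sigma))
    return best, n

best, n_used = select_best([0.0, 0.3, 1.0])
```

The conservative margin makes this baseline sample-hungry when means are close together, which is exactly the regime the abstract targets with a learned policy.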
Application of Generative Artificial Intelligence for Epidemic Modeling Hannah Danielle Ladera Villaplana and Jaeyoung Kwak (Nanyang Technological University), Michael Lees (University of Amsterdam), Hongyin Li (Agency for Science Technology and Research (A*STAR)), and Wentong Cai (Nanyang Technological University) Abstract AbstractEpidemic models have become increasingly useful, especially in the wake of the recent COVID-19 pandemic, emphasizing the crucial role of human behavior in the spread of disease. There has been a recent rise in the usage and popularity of generative artificial intelligence (GenAI), such as ChatGPT, especially with its ability to mimic human behavior. In this study, we demonstrate a novel application of GenAI for epidemic modeling. We employed GenAI to create agents living in a hypothetical town and to simulate their behavior within the context of an ongoing pandemic. We performed a series of simulations to quantify the impact of agent traits and the availability of information about health conditions, the virus, and government guidelines on disease spread patterns in terms of peak time and epidemic duration. We also characterized the most influential factors in agents’ decision-making using a random forest model. pdf(Gen)AI versus (Gen)AI in Industrial Control Cybersecurity Cynthia Zhang, Ranjan Pal, Corwin Nicholson, and Michael Siegel (Massachusetts Institute of Technology) Abstract Abstract(Gen)AI is emerging as a powerful force transforming industrial control and business/enterprise productivity. This paper investigates the challenges and opportunities stemming from (Gen)AI on industrial control systems (ICSs) security within the framework of the Cyber Kill Chain (CKC). Leveraging the CKC framework, we examine how (Gen)AI enables attackers to automate each phase of the CKC – reconnaissance, weaponization, delivery, exploitation, installation, command and control, and action on objectives.
Conversely, (Gen)AI also empowers defenders to employ advanced techniques such as AI-powered firewalls, anomaly detection, and automated incident response to thwart cyber threats effectively. We study how this defense dynamic using (Gen)AI operates within each phase of the Cyber Kill Chain for ICSs. We back up our attack-defense dynamics study with simulations on real-world ICS scenarios. To the best of our knowledge, this is the first cybersecurity study in the joint space of ICSs, the Cyber Kill Chain (CKC), and (Gen)AI. pdfLarge Language Model Assisted Experiment Design with Generative Human-Behavior Agents Haoyu Liu, Yifu Tang, Zizhao Zhang, Zeyu Zheng, and Tingyu Zhu (University of California, Berkeley) Abstract AbstractExperiment design, despite its wide use in fields such as economics, sociology, and business operations, can encounter challenges or ethical issues when involving real human beings. On the other hand, the development of large language models (LLM) such as ChatGPT has empowered the development of generative agents with believable human behavior. This work develops an implementation framework to use LLM-empowered generative agents to assist experiment design where it is prohibitive to involve real human beings. pdf Simulation and Artificial IntelligenceSimulation in Industrial Application Chair: Seongho Cho (Ajou University)
Surrogate Model for Distribution Networks Influenced by Weather Juan M. Restrepo (Oak Ridge National Laboratory, University of Tennessee); James Nutaro (Oak Ridge National Laboratory); Chris Sticht (Idaho National Laboratory); and Teja Kuruganti (Oak Ridge National Laboratory) Tags: data driven, discrete-event, machine learning, system dynamics, government Abstract AbstractWe propose a method for generating reduced representations of time series and for constructing low-dimensional surrogate models for time-dependent calculations of power and voltage in distribution networks. We employ Fourier polynomials. The surrogate model strategy is aimed at reducing the computational cost of time-dependent simulations, albeit at the expense of fidelity. The reduced representation is achieved by identifying a small and most consequential subset of degrees of freedom. In power and voltage distribution network dynamics that are heavily influenced by strong cyclic weather events, e.g., the hourly, diurnal, and seasonal cycles, the weather/climate time series spectrum exposes these most energetic components. Once the degrees of freedom are identified, their amplitudes are optimized using training data. The key challenge in using spectral methods in power network surrogates is addressing the computation of quotients. For this we propose a numerically stable deconvolution strategy. pdfSimulation Based Cycle Time Prediction For Robot Welding Seongho Cho, Donguk Kim, and Sangchul Park (Ajou University) Tags: estimation, machine learning, manufacturing, Python, student Abstract AbstractThis paper introduces methodologies aimed at predicting the cycle time of robotic arm spot welding operations, which are essential for vehicle body assembly process plans. Predicting the cycle time of a robot arm is crucial for process planning, as it is closely linked to overall production efficiency and safety considerations. However, it is common for companies in the vehicle body assembly industry to rely on rough estimates for cycle time prediction. We propose methodologies that ensure ease of use and accuracy based on simulation data to cope with this problem. This paper provides an overview of the overall process of each methodology, including data collection and model construction. Additionally, experiments are conducted to compare the performance of each methodology, with results indicating that our proposed approach outperforms conventional methods. Through this research, we found potential for the development of advanced methods applicable in the industry. pdf
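The reduced Fourier representation described in the surrogate-model abstract above can be illustrated with stdlib Python on a toy series; the signal, its cycles, and the noise level are our own invention:

```python
import math
import random

def fourier_surrogate(x, k):
    """Project an evenly sampled series onto cos/sin harmonics, keep only
    the k most energetic frequencies, and return the reconstruction."""
    n = len(x)
    modes = []
    for j in range(n // 2 + 1):
        a = sum(x[t] * math.cos(2.0 * math.pi * j * t / n) for t in range(n))
        b = sum(x[t] * math.sin(2.0 * math.pi * j * t / n) for t in range(n))
        scale = 1.0 / n if j == 0 or (n % 2 == 0 and j == n // 2) else 2.0 / n
        modes.append((j, scale * a, scale * b))
    modes.sort(key=lambda m: -(m[1] ** 2 + m[2] ** 2))  # most energetic first
    kept = modes[:k]
    return [sum(a * math.cos(2.0 * math.pi * j * t / n)
                + b * math.sin(2.0 * math.pi * j * t / n)
                for j, a, b in kept) for t in range(n)]

# Four days of hourly readings: mean level, daily and half-daily cycles, noise.
rng = random.Random(3)
n = 96
series = [5.0 + 2.0 * math.sin(2.0 * math.pi * t / 24.0)
          + 0.8 * math.cos(2.0 * math.pi * t / 12.0)
          + rng.gauss(0.0, 0.05) for t in range(n)]
approx = fourier_surrogate(series, 3)   # DC plus the two dominant cycles
```

Keeping only the three most energetic modes already reconstructs the series to within the noise level, which is the cost-versus-fidelity trade the abstract describes.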
Simulation Around the World Track Coordinator - Simulation Around the World: María Julia Blas (INGAR (CONICET-UTN)), Stewart Robinson (Newcastle University), Theresa Roeder (San Francisco State University) Simulation Around the WorldDEVS Chair: María Julia Blas (INGAR (CONICET-UTN))
DEVS COPILOT: Towards Generative AI-Assisted Formal Simulation Modelling based on Large Language Models Best Contributed Applied Paper - Finalist Tobias Carreira-Munich, Valentín Paz-Marcolla, and Rodrigo Castro (Universidad de Buenos Aires) Abstract AbstractIn this paper we explore to what extent generative AI, in the form of Large Language Models such as GPT-4, can assist in obtaining a correct executable simulation model. The starting point is a high-level description of a system, expressed in natural language, which evolves through a conversational process based on user input, including suggestions for corrections. We introduce a methodology and a tool inspired by the metaphor of a copilot, a form of human-AI teaming strategy well known for its success in programming tasks.
We adopt the Discrete Event System Specification (DEVS), a suitable candidate formalism that allows general-purpose simulation models to be specified in a simple yet rigorous modular and hierarchical way. The result is DEVS Copilot, an AI-based prototype that we systematically test in a case study that builds several lighting control systems of increasing complexity. In all cases, DEVS Copilot succeeds at producing correct DEVS simulations. pdfSystematic Performance Optimization for the PowerDEVS Simulator and Models of Large-Scale Real-World Applications Ezequiel Pecker-Marcosig (FCEyN-UBA / Instituto de Ciencias de la Computación (ICC-CONICET)); Gerónimo Romczyk (Departamento de Computación, FCEyN-UBA / Instituto de Ciencias de la Computación (ICC-CONICET)); Matías Alejandro Bonaventura (CERN / Instituto de Ciencias de la Computación (ICC-CONICET)); and Rodrigo Daniel Castro (Departamento de Computación, FCEyN-UBA / Instituto de Ciencias de la Computación (ICC-CONICET)) Abstract AbstractAs simulation models grow in size and complexity, their performance comes under increasing pressure. In order to deliver results within an acceptable timeframe, new mechanisms are required that reduce the simulation performance burden while avoiding simplifications of model behavior. We present a systematic methodology for optimizing the PowerDEVS toolkit, improving both the core simulation engine and general-purpose behavioural models in a way that they stand as independent yet compatible tasks, thanks to the strict separation between simulator and models promoted by the DEVS formalism. We tested our optimizations on a variety of models that emphasize different effects. The main target system of this work is the large-scale network model of the Data Acquisition System (DAQ) at CERN, which is used to derive hardware requirements for future accelerator upgrades. Speedups for the DAQ model more than double the number of experiments that can be run within the same amount of time. pdfA Modeling Framework for Complex Systems María Julia Blas and Silvio Gonnet (INGAR (CONICET-UTN)) Abstract AbstractThis paper presents a modeling framework for defining abstractions of real-world complex systems, promoting the development of discrete-event simulation models based on DEVS. An ontology, a metamodel, and a reasoner are combined in one single structure to allow an upgrade of an abstraction model to an implementation model. Our motivation is to reduce the effort related to the modeling part when specifying DEVS models for complex systems described from an abstraction of reality built over a research question. Applications include an easier introduction to M&S for students of any scientific field, who can define an abstraction model and then move to DEVS models (from formalization to implementation). pdf Simulation Around the World, Modeling MethodologyDigital Twins Chair: John Shortle (George Mason University)
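The strict separation between model and simulator that DEVS enforces, and that the abstracts above exploit, can be made concrete with a deliberately tiny sketch of our own, unrelated to PowerDEVS or DEVS Copilot: an atomic model exposing time-advance, output, and internal-transition functions, driven by a root-coordinator loop.

```python
class Counter:
    """Toy DEVS-style atomic model: emits an incrementing count every
    `period` time units, then passivates after `limit` outputs."""
    def __init__(self, period, limit):
        self.period, self.limit, self.count = period, limit, 0
    def time_advance(self):
        return self.period if self.count < self.limit else float("inf")
    def output(self):
        return self.count + 1
    def internal(self):
        self.count += 1

def simulate(model, t_end):
    """Root-coordinator loop for a single atomic model: jump from one
    internal event to the next, collecting (time, output) pairs."""
    t, trace = 0.0, []
    while True:
        ta = model.time_advance()
        if ta == float("inf") or t + ta > t_end:
            return trace
        t += ta
        trace.append((t, model.output()))
        model.internal()

trace = simulate(Counter(period=2.0, limit=3), t_end=10.0)
# trace == [(2.0, 1), (4.0, 2), (6.0, 3)]
```

Because the simulator only ever calls the three model functions, either side can be optimized or replaced independently, which is the property the PowerDEVS abstract leans on.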
Digital Twins for Picking Processes - Cases Developed by a Brazilian Consulting Company Leonardo Chwif and Wilson Pereira (Simulate), Bruno Santini (Luxottica), and Felipe Tomazin (ZF) Abstract AbstractThe “picking” process is a relatively common logistics process in distribution centers: according to the picking list, products are separated and packed in specific containers (usually boxes) where customer orders are consolidated. The vast majority of picking processes involve intensive use of labor. Due to the complexity of this process, simulation is an extremely well-suited tool for its correct performance evaluation. This article will depict some applications of digital twin simulations for picking processes in practical cases for daily picking operation evaluations. pdfPort Management Digital Twin and Control Tower Integration: an Approach to Support Real-time Decision Making Alice Oliveira Fernandes (UNICAMP, Belge Smart Supply Chain) and Daniel Gutierres, Marcelo Fugihara, and Bruno de Norman (Belge Smart Supply Chain) Abstract AbstractDiscrete event simulation plays a pivotal role in logistics decision-making, which must often be initiated in real time based on the current state of the system. The architecture outlined in this article integrates a real-time Digital Twin with simulation logic and a Control Tower into a cohesive model, thereby reducing offline efforts and runtime. This paper presents a groundbreaking project at a Brazilian port, whose holistic approach offers a comprehensive overview of port operations and enables predictive insights for up to 72 hours in advance. Beyond enhancing operational efficiency, it promotes proactive decision-making and adaptive resource allocation, marking a paradigm shift in port management. The integration of real-time feedback and dynamic optimization algorithms can further enhance the responsiveness and adaptability of the system to changing operational conditions.
Future development lies in enhancing the predictive analytics capabilities of the model by leveraging machine learning algorithms and advanced analytics techniques. pdfA High Extensible Modeling Method using Three-layer Component-based Architecture Yuan Haozhe, Yao Yiping, Tang Wenjie, and Zhu Feng (National University of Defense Technology) Tags: composability, complex systems, student Abstract AbstractExtensibility and reusability are important yet competing objectives in the modeling process. Despite the progress made by current modeling methodologies, they tend to be limited by either one-way control transfers or static data linkages. Considering these limitations, we introduce a novel three-layer, progressive, composable modeling architecture that divides the system model into three layers: components, entities, and systems. It incorporates behavior trees for the assembly of functional components into entities and adopts a publish-subscribe communication paradigm to dynamically establish interactions among entities. Case studies confirmed that this approach facilitates the efficient development of simulation models. Moreover, it allows for the agile adaptation of entity behavior models and their interconnections, ensuring robust extensibility and optimizing the reuse of models as simulation requirements evolve. pdf Simulation Around the World, Reliability Modeling and SimulationApplications Chair: Antuela Tako (Nottingham Trent University)
Modeling and Simulation of Battery Recharging for UAV Applications: Smart Farming, Disaster Recovery, and Dengue Focus Detections Leonardo Grando and Juan Fernando Galindo Jaramillo (Campinas State University); Jose Roberto Emiliano Leite (Campinas State University, Fundação Hermínio Ometto); and Edson Luiz Ursini (Campinas State University) Abstract AbstractThe applications of Unmanned Aerial Vehicles (UAVs), or drones, have been increasing in areas such as smart farming, disaster recovery, and combating tropical mosquito-borne diseases such as dengue. Because battery life is short, at most 20 to 30 minutes in some cases, most UAVs lack the capacity to carry out long missions. This work presents two contributions: i) a description of the characteristics observed in three drone applications (agricultural, disaster, and against dengue disease), and ii) the creation of an Agent-Based Simulation Model considering energy supply simulation. This model assumes that the agents will not collude on their recharging decisions. pdfA Discrete-event Simulation Model for Terminal Capacity Planning in an Indonesian Container Port Alicia Bernadine Pandy and Niniet Indah Arvitrida (Institut Teknologi Sepuluh Nopember) Abstract AbstractThis study examines a container terminal in Surabaya, Indonesia. There are three types of equipment: container cranes, RTG cranes, and head trucks. Despite the availability of these resources, their utilization is still low. To address this issue, given the system’s inherent complexity and the interdependencies among its components, a discrete-event simulation modeling approach is used. The simulation model explores potential scenarios for improvement and evaluates the best alternatives to improve equipment utilization while taking into account the quantity of ships and containers processed.
The experiments also consider the combination of equipment assignments during the loading-unloading activities. The results indicate that increases in the number of containers and ships do not translate into proportional, linear increases in equipment utilization rates. pdfUnderstanding Energy Consumption Trends in High Performance Computing Nodes Jonathan Muraña (UDELAR, Uruguay); Juan J. Durillo (Leibniz-Rechenzentrum); and Sergio Nesmachnow (UDELAR, Uruguay) Tags: estimation, environment Abstract AbstractThis article presents a study of energy consumption behavior in high performance computing nodes in relation to the usage of computing resources. Linear models are constructed to identify common patterns or differences in energy consumption across different architectures. The study is significant as it provides insights into the energy consumption of computing nodes, helping to build simple, yet useful and transparent models. Moreover, the methodology provides building blocks for constructing new models with high predictive quality and broad applicability across different architectures. The results reveal similarities in energy consumption across different architectures when compared in terms of CPU cycles and cache misses. Additionally, employing linear models based on CPU cycles and cache misses allows for both the explanation of energy consumption behavior and the achievement of reasonable predictive quality. Overall, a partition-based linear model outperforms a global linear model and a mean partition-based model by up to 5.7%. pdf
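The energy-model abstract above fits linear models of node energy consumption on CPU cycles and cache misses. A sketch of that kind of fit on synthetic data (the coefficients and data are invented for illustration, not measurements from the paper):

```python
# Sketch of a linear energy model of the kind described above: energy as a
# linear function of CPU cycles and cache misses. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
cycles = rng.uniform(1e9, 5e9, size=200)
misses = rng.uniform(1e6, 5e7, size=200)
# Hypothetical ground truth: 2 nJ per cycle, 30 nJ per miss, 50 J baseline.
energy = 2e-9 * cycles + 3e-8 * misses + 50 + rng.normal(0, 0.5, size=200)

# Ordinary least squares with an intercept column.
X = np.column_stack([cycles, misses, np.ones_like(cycles)])
coef, *_ = np.linalg.lstsq(X, energy, rcond=None)

print(coef)  # approximately [2e-9, 3e-8, 50]
```

The transparency the abstract highlights comes from the fact that each fitted coefficient has a direct physical reading (energy per cycle, energy per cache miss, idle baseline).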
Simulation as Digital Twin Track Coordinator - Simulation as Digital Twin: Giovanni Lugaresi (KU Leuven), Jie Xu (George Mason University) Simulation as Digital TwinDigital Twin Calibration and Validation Methods Chair: Joost Mertens (University of Antwerp)
Data Assimilation for Online Calibration of Simulation Digital Twin – a Case Study with Multiple Model Parameters Xiaolin Hu and Mingxi Yan (Georgia State University) Abstract AbstractCalibrating a simulation digital twin using real-time observation data is a key requirement for keeping the simulation model aligned with the physical system under study. This paper applies a particle filter-based data assimilation framework to the online calibration of a simulation digital twin, with a focus on calibrating multiple model parameters. An overview of the problem formulation and the particle filter-based data assimilation is provided. The combined effect of multiple model parameters and its impact on multi-parameter calibration is discussed. Experimental results from a simulation case study demonstrate the effectiveness of data assimilation for the online calibration of simulation digital twin models. pdfSimulation-based Digital Twins: an Accreditation Method Carlos Henrique dos Santos (Federal University of Alfenas, Federal University of Itajubá) and José Arnaldo Barra Montevechi, Afonso Teberga Campos, Rafael de Carvalho Miranda, José Antonio de Queiroz, and João Victor Soares do Amaral (Federal University of Itajubá) Tags: discrete-event, validation, digital twin Abstract AbstractThe simulation-based Digital Twin (DT) has been gaining prominence in recent years and represents a revolution in decision-making processes. In this context, increasingly fast and efficient decisions are made by mirroring the behavior of physical systems and using advanced analysis techniques. On the other hand, this article draws attention to the challenges of ensuring the accreditation of simulation models over time, as traditional approaches do not consider the periodic updating of the model. Therefore, this work proposes an approach based on the periodic evaluation of these models using Machine Learning and control charts.
More specifically, the authors adopted the K-Nearest Neighbors (K-NN) classifier, combined with a p-chart. The proposed approach has been tested in theoretical and real case studies, allowing us to monitor the DT results and ensure its accreditation. The broad applicability of the proposed tool is highlighted, as it can be used in simulation-based DTs with different characteristics. pdfLocalizing Faults in Digital Twin Models by Using Time Series Classification Techniques Joost Mertens and Joachim Denil (University of Antwerp) Tags: cyber-physical systems, digital twin, Matlab, Python, student Abstract AbstractTo provide its services, a digital twin system relies on models whose behavior adequately simulates/mimics that of the twinned system. Deviations between the behaviors necessitate (user) intervention to realign the model and twinned system. We can intervene in the model (e.g. reinitialization, reparametrization) or in the twinned system (e.g. physical adjustment, replacement of parts). In either case, the action taken depends on the cause of the deviation. One way to find the cause is to use a classifier on the digital twin system's data. In a digital twin system, the data is generally a time series of states, inputs, and outputs. In this paper, we apply a state-of-the-art time series classification technique to detect and classify faults in a digital twin system of a scale-model gantry crane. We find that a classifier trained on simulation data yields adequate performance when applied to the real system. pdf Simulation as Digital TwinApplications of Digital Twins Chair: Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, University of Southern Denmark)
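The accreditation abstract above monitors DT results with a K-NN classifier combined with a p-chart. A minimal sketch of the p-chart step, with a hypothetical subgroup size and baseline deviation rate:

```python
# Sketch of p-chart monitoring as used in the accreditation approach above:
# track the proportion of simulation outputs flagged as deviating (e.g. by a
# K-NN classifier) and signal when it leaves the control limits.
# Sample sizes and proportions below are hypothetical.
import math

def p_chart_limits(p_bar, n):
    """Three-sigma control limits for a proportion with subgroup size n."""
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)

# Baseline deviation rate estimated while the model was known to be valid.
p_bar, n = 0.05, 100
lcl, ucl = p_chart_limits(p_bar, n)

def accredited(deviating_count):
    """True if this subgroup's deviation proportion stays within limits."""
    return lcl <= deviating_count / n <= ucl

print(accredited(7), accredited(20))  # True False
```

A subgroup whose flagged proportion exceeds the upper control limit would trigger re-validation or recalibration of the DT model.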
Tools, Capabilities and Experience of Digital Twins at the MITRE Corporation Edward Y. Hua, Jon Cline, Robert Wittman, and Bianica Pires (The MITRE Corporation) Abstract AbstractThe technology of Digital Twin (DT) engineering is being increasingly leveraged in industry for a wide range of applications. More recently, many departments in the U.S. government are exploring the use of DTs to support their respective mission mandates. The MITRE Corporation, as a Federally Funded Research and Development Center (FFRDC), is actively developing DT tools and capabilities to meet the needs of its government customers. While more people are becoming familiar with the term “Digital Twin,” not everyone has a clear understanding of what it is and how it is implemented in different applications. In this paper, we present some of these DT tools and capabilities, as well as some MITRE projects that have benefited from the DT toolset. Additionally, we discuss several nascent areas where DT may bring greater value to MITRE customers. pdfSimulation for an Energy-Efficient Semiconductor Manufacturing Network Hans Ehm, Abdelgafar Ismail Mohammed Hamed, Rahul Gurudatt Nayak, Saskia Serr, and Tristan Scheuermann (Infineon Technologies AG) Tags: discrete-event, semiconductor, supply chain, AnyLogic, industry Abstract AbstractThe semiconductor industry plays an important role in reducing carbon emissions and facilitating CO2 savings through energy-saving applications, while also generating a significant CO2 footprint during the manufacturing process of chips. Despite this, microchips in end-applications have the potential to save several times more CO2 than the emissions generated. Chip manufacturers are required to report CO2 emissions transparently, and customers are demanding a smaller footprint and a larger handprint.
In this paper, we present a simulation study which analyzes the energy consumption of two exemplary semiconductor products to assess the ratio of fixed and variable energy consumption. In addition, the paper analyzes the most energy-consuming processes and the linkage between capacity utilization and energy consumption in a typical semiconductor facility. The paper suggests, as future work, analyzing the handprint as well and investigating the relation between the footprint and the handprint. pdfModular Validation Within Digital Twins: A Case Study in Reliability Analysis of Manufacturing Systems Ashkan Zare (University of Southern Denmark) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, University of Southern Denmark) Tags: data driven, validation, digital twin, manufacturing, student Abstract AbstractAs manufacturing rapidly evolves, optimizing processes is essential. Digital Twins, which act as near real-time virtual replicas of the corresponding real-world systems, can support this optimization by providing insights and supporting decision-making. Digital Twins can only be fully effective if their underlying models continuously and accurately reflect the corresponding physical systems. However, not all model components change at the same pace, and relevant data updates also vary in frequency. Thus, Digital Twins require robust validation mechanisms that can identify which parts of models need to be re-extracted, which parts need to be recalibrated, and which parts need to remain the same. This is a complex task that necessitates precise partitioning of models with respect to the above-noted considerations. Here, we propose a novel approach to modular validation, aimed at supporting Digital Twins. To illustrate our approach, we provide a case study in reliability analysis of manufacturing systems. pdf Simulation as Digital TwinDigital Twins and Manufacturing Chair: Guodong Shao (National Institute of Standards and Technology)
Building a Digital Twin of a CNC Machine Tool Deogratias Kibira (National Institute of Standards and Technology, University of Maryland) and Guodong Shao, Rishabh Venketesh, and Matthew Triebe (National Institute of Standards and Technology) Abstract AbstractDigital twin technology can positively transform manufacturing decision-making, including supporting optimal utilization of machine tools on the shop floor. However, building digital twins faces several challenges, such as data management, data security, connectivity, and insufficient standardized modeling procedures. Recently, the International Organization for Standardization (ISO) published a new series of standards, ISO 23247 Digital Twin Framework for Manufacturing, to provide guidance for implementing these aspects of digital twins in a manufacturing environment. This paper provides a case study of using the ISO framework to build a digital twin of a Computer Numerical Control (CNC) machine tool. This research demonstrates how the ISO standard, system modeling, machining data standards, messaging protocols, data processing, and data visualization tools support the creation of a digital twin of a CNC machine tool. The approach of this paper can be used by manufacturers to implement their own digital twin applications. pdfModeling Operational Control in Discrete-Event Logistics Systems and their Digital Twins Lorenzo Ragazzini (Politecnico di Milano), Leon McGinnis (Georgia Institute of Technology), and Elisa Negri and Marco Macchi (Politecnico di Milano) Abstract AbstractIn manufacturing systems, operational control plays a crucial role in ensuring proper functioning and efficiency. Traditional approaches to modeling operational controllers in simulation often suffer both from an inability to properly represent decisions as they are made in real systems and from rigidity and a lack of adaptability to changing system requirements and configurations during their design.
This paper explores a novel approach to designing and implementing operational controllers in discrete-event simulation based on the introduction of control policies as standardized structures that explicitly integrate operational control decision-making into DES models. Standard structures allow for easy integration with optimization tools, facilitating the exploration of different control policies during the design of a new system. The use of control policies also has important implications for the development of digital twins of manufacturing systems, as it helps align the way system control is designed with the control of real systems. pdfTowards Standardizing the Integration of Digital Twins in Manufacturing Systems Hedir Oukassi (University of Tunis El Manar), Amel Jaoua (National Engineering School of Tunis), Soumaya Yacout (École Polytechnique de Montréal), Elisa Negri (Politecnico di Milano), and Mehdi Jaoua (Independent Researcher) Abstract AbstractIndustrial sectors are increasingly prioritizing the integration of Digital Twin (DT) technology into their operations to enhance manufacturing system decision-making. However, implementing DTs remains highly challenging due to the technology's novelty, highlighting the pressing need for more formal methodologies to guide its effective implementation. Therefore, the aim of this paper is to present ongoing research concerning DT design and implementation in accordance with the ISO 23247 standard. The objective of this DT is to embed real-time Autonomous Mobile Robot dispatching decisions in a small-scale production system. The present work outlines the essential steps for transitioning from a traditional Discrete Event Simulation model to a Real-Time Simulation, ensuring connectivity and synchronization with the physical system to achieve efficient implementation.
It also exhibits the benefit of using the rigorous formalism of UML, clarifying the complex sequencing involved in capturing and transmitting real-time sensor data between the physical and cyber systems. pdf Simulation as Digital TwinDigital Twins and Logistics Chair: Cathal Heavey (University of Limerick)
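The operational-control abstract earlier in this session introduces control policies as standardized, swappable structures in DES models. One way such a pluggable policy structure might look (the interface and policy names are hypothetical illustrations, not the authors' design):

```python
# Sketch of a "control policy as a standardized structure": the simulation
# asks a policy object for a dispatching decision instead of hard-coding the
# rule. Interface and policy names are hypothetical.
class ControlPolicy:
    def choose(self, queue):
        raise NotImplementedError

class FIFO(ControlPolicy):
    def choose(self, queue):
        return min(queue, key=lambda job: job["arrival"])

class ShortestProcessingTime(ControlPolicy):
    def choose(self, queue):
        return min(queue, key=lambda job: job["proc_time"])

def next_job(queue, policy):
    # The DES model stays unchanged while policies are swapped, which is
    # what makes the structure easy to connect to an optimization tool.
    return policy.choose(queue)

queue = [{"id": "A", "arrival": 0, "proc_time": 9},
         {"id": "B", "arrival": 1, "proc_time": 2}]
print(next_job(queue, FIFO())["id"])                    # A
print(next_job(queue, ShortestProcessingTime())["id"])  # B
```

An optimizer can then enumerate or search over policy objects without touching the simulation logic itself.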
Supply Chain Digital Twin Framework for Hybrid Manufacturing Strategy Selection: a Case Study from the Semiconductor Industry Amir Ghasemi (Karlsruhe Institute of Technology); Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, University of Southern Denmark); and Cathal Heavey (University of Limerick) Abstract AbstractWithin the Manufacturing Supply Chain planning domain, the integration of Digital Twins as a decision-making tool presents a promising development path. This research introduces a novel Supply Chain Digital Twin (SCDT) framework for manufacturing supply chains, specifically tailored to address the manufacturing strategy selection problem at the strategic product level. To demonstrate our proposed SCDT framework, we employ the Business Process Model and Notation (BPMN) as a model-based systems engineering tool. The primary aim of this study is to detail the design and integration of SCDTs within manufacturing supply chain networks, facilitating decisions on manufacturing strategy selection. The practical applicability of our proposed SCDT framework is further demonstrated through a case study in the semiconductor industry, highlighting its utility and potential benefits. pdfA Digital Twin-based Simulator for Small Modular and Microreactors Zavier Ndum Ndum, Jian Tao, Yang Liu, John Ford, Viktor Vlassov, Noah Morton, Johnathan Grissom, Pavel Tsvetkov, and Simon Adu (Texas A&M University) Tags: system dynamics, conceptual modeling, digital twin, Matlab, student Abstract AbstractThis paper reports preliminary work on a mechanistic model-based digital twin (DT) for Gen IV reactors. The case study is a conceptual 4.5 MWth Small Modular Lead-cooled Fast (LFR) Research Reactor whose design incorporated aspects from all three existing families of Gen IV LFRs. The back end of the DT exploited a modular approach consisting of the Neutronics and Thermohydraulic Coupling of the reactor core. This modular approach leaves room for subsequent modification and/or addition of new blocks as the design concept matures without perturbing the entire system. After benchmarking simulation results against data from the literature, the system’s GUI demonstrated the capability to perform and visualize common operational transients either as a stand-alone simulator or in real time using an MQTT broker. Insights derived from this virtual environment could contribute to the ongoing refinement of LFR technology, thus accelerating development through design testing, visualization, and optimization.
pdfThe Role of Financial Digital Twin in the Supply Chain Management Anjali Vaghani (Technische Universität Dortmund); Zhaoqing Gong (RIF Institut für Forschung und Transfer e.V.); and Michael Henke (Technische Universität Dortmund; Fraunhofer Institute for Material Flow and Logistics, Dortmund) Tags: digital twin, supply chain, student Abstract AbstractAs supply chains undergo an accelerated process of globalization and gain complexity, it becomes evident that conventional approaches to financial management are no longer adequate. The reliance on historical data, traditional budgeting, and delayed insights impedes the accuracy of financial forecasting, the management of cash flow, and the mitigation of risk. This paper presents an exploration of the concept of a financial digital twin, which effectively serves as a digital representation of the financial processes of a supply chain by integrating near real-time data. The objective of this paper is to examine the limitations of traditional financial management practices and to explore how the utilization of a digital twin can revolutionize the practice of financial management. Additionally, the paper investigates the benefits and potential applications of financial digital twins within the supply chain. The paper is designed to provide experts with a comprehensive understanding of a technology that holds the potential to transform financial supply chain management. pdf Simulation as Digital TwinAdaptive Digital Twins Chair: Giovanni Lugaresi (KU Leuven)
Integrating Dynamic Digital Twins: Enabling Real-Time Connectivity for IoT and Virtual Reality Lejla Erdal, Ammar Gubartalla, Paulo Lopes, and Huizhong Cao (Chalmers University of Technology); Guodong Shao (National Institute of Standards and Technology); Per Lonnehed (PTC); Henri Putto and Abbe Ahmed (Rockwell Automation Inc.); and Sven Ekered and Björn Johansson (Chalmers University of Technology) Abstract AbstractWhen Industry 4.0 technology is applied, it can increase resilience in production systems with interconnected machines and workers. As the technology is still evolving, research and implementation gaps due to issues such as costs, security, and a lack of use cases and domain expertise for enabling Digital Twins need to be bridged. This paper proposes a framework connecting an Internet of Things platform with Digital Twin-compliant software based on ISO 23247. The simulation software provides cognitive support for the user, who is immersed in the Digital Twin through Virtual Reality, and provides applications such as remote control and exploration of what-if scenarios. Connectivity with an Internet of Things platform facilitates real-time bi-directional communication, collaboration, monitoring, and assistance. A proof-of-concept use case based on an assembly line for a lab-scale drone factory was used for validation of the deployed Digital Twin based on ISO 23247.
pdfImparting Adaptiveness and Resilience to Parcel Delivery Networks: A Digital Twin Centric Simulation Based Approach Souvik Barat, Abhishek Yadav, Himabindu Thogaru, Vinay Kulkarni, and Kaustav Bhattacharya (Tata Consultancy Services Ltd) Tags: agent-based, complex systems, digital twin, logistics Abstract AbstractContemporary parcel delivery companies face a significant surge in demand that comes with increased customer expectations for flawless and timely delivery in a shrinking window of opportunity, to be met in an increasingly competitive world while facing a variety of micro- and macro-level uncertainties. Current industry practice, which relies on localized analysis to meet these expectations, has turned out to be ineffective. This paper argues that imparting adaptiveness and resilience to the parcel delivery network is the key to a pragmatic solution. It presents a holistic approach based on simulatable digital twins and composable agents to enable “in silico” business experimentation, wherein a set of what-if scenarios is simulated to help evaluate the efficacy of the current strategy and identify suitable modifications to the strategy if necessary. The paper illustrates the proposed approach on a case study from the parcel industry and demonstrates its utility and efficacy on a set of real-life scenarios. pdfReal-Time Tracking of Production in Assembly Operations Using Agent-Based Modeling and Digital Twin Techniques Michail Katsigiannis and Konstantinos Mykoniatis (Auburn University) Tags: agent-based, digital twin, manufacturing, AnyLogic, student Abstract AbstractThe integration of Digital Twins into manufacturing environments unlocks significant value through a range of benefits. In this work, we demonstrate the digitalization of a physical system. The chosen system is an emulated automotive assembly process, for which we create a virtual representation of both the physical system's state and the product itself using Agent-Based Modeling techniques.
To twin the physical and virtual worlds, we leverage real-time data collected from automation systems and Internet of Things devices. The motivation for this work lies in the need to create Digital Twin models that can integrate bidirectional data flows into industrial simulation models. pdf Data Science for Simulation, Simulation as Digital Twin, Simulation OptimizationCross-Track Session 2: Methods Chair: Jinbo Zhao (Texas A&M University)
Simulation Optimization with Non-Stationary Streaming Input Data Songhao Wang (Southern University of Science and Technology), Haowei Wang (Rice-Rick Digitalization PTE. Ltd.), Jianglin Xia (Southern University of Science and Technology), and Xiuqin Xu (McKinsey & Company) Tags: data driven, input modeling, optimization, student Abstract AbstractSimulation optimization has become an emerging tool for the design and analysis of real-world systems. In stochastic simulation, the input distribution is a main driving force accounting for system randomness. Most existing works on input modeling focus on stationary input distributions. In reality, however, input distributions can experience sudden disruptive changes due to external factors. In this work, we consider input modeling with non-stationary streaming input data, where the input data arrive sequentially across different decision stages. Both the parameters of the input distributions and the disruptive change points are unknown. We use a Markov Switching Model to estimate the non-stationary input distributions and design a metamodel-based approach to solve the resulting optimization problem. The proposed metamodel and optimization algorithm can utilize the simulation results from all past stages. A numerical study on an inventory system shows that our algorithm solves the problem more efficiently than common approaches. pdfCatalyzing Intelligent Logistics System Simulation with Data-driven Decision Strategies Shiqi Hao, Yang Liu, Yu Wang, Xiaopeng Huang, Muchuan Zhao, and Xiaotian Zhuang (JD Logistics, Inc.) Tags: machine learning, logistics, Java, Python, industry Abstract AbstractMachine learning is becoming an important technique in modern simulation systems due to its strong capability of capturing the random, complex, and dynamic features of the physical world.
Based on these advantages, it has been employed as a powerful tool that enables the intelligent simulation of large-scale logistics systems in a highly efficient manner. Inspired by these applications, this work presents a new paradigm in which machine learning is utilized to generate data-driven decision strategies that accurately emulate the practical operations in logistics systems and improve simulation accuracy. Compared with existing approaches, the proposed method is also characterized by high flexibility and transparency. Consequently, it can adapt to a large variety of logistics system architectures and capture adequate details of system dynamics. Experiments have been conducted based on the simulation of large-scale real-world logistics systems, where the proposed method demonstrates superior accuracy in both strategy learning and simulation. pdfA Framework for Digital Twin Collaboration Zhengchang Hua (Southern University of Science and Technology, University of Leeds); Karim Djemame (University of Leeds); Nikos Tziritas (University of Thessaly); and Georgios Theodoropoulos (Southern University of Science and Technology) Tags: agent-based, distributed, complex systems, digital twin, student Abstract AbstractDigital Twins (DTs) have emerged as a powerful tool for modeling Large Complex Systems (LCSs). Their strength lies in the detailed virtual models that enable accurate predictions, which presents challenges for traditionally centralized approaches due to the immense scale and decentralized ownership of LCSs. This paper proposes a framework that leverages the prevalence of individual DTs within LCSs. By facilitating the exchange of decisions and predictions, this framework fosters collaboration among autonomous DTs, enhancing performance. Additionally, a trust-based mechanism is introduced to improve system robustness against poor decision-making within the collaborative network.
The framework's effectiveness is demonstrated in a virtual power plant (VPP) scenario. The evaluation results confirm that the system achieves its objectives across various test cases and scales to large deployments. pdf Simulation as Digital TwinDigital Twins and Metaverse Chair: Alexander Wuttke (TU Dortmund University)
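The streaming-input abstract earlier in this session estimates non-stationary input distributions with a Markov Switching Model whose disruptive change points are unknown. As a much simpler stand-in for detecting such changes in streaming input data, a one-sided CUSUM rule can flag an upward shift in the input mean (all parameters below are hypothetical):

```python
# Simplified stand-in for change detection in streaming input data: a
# one-sided CUSUM rule signaling an upward mean shift. This is NOT the
# Markov Switching Model of the paper, just a minimal illustration.
def cusum_detect(stream, target_mean, slack=0.5, threshold=4.0):
    """Return the index at which an upward mean shift is signaled, else None."""
    s = 0.0
    for i, x in enumerate(stream):
        s = max(0.0, s + (x - target_mean - slack))
        if s > threshold:
            return i
    return None

# Input data arrive sequentially; the mean jumps from 2.0 to 5.0 at index 5.
stream = [2.1, 1.9, 2.0, 2.2, 1.8, 5.1, 4.9, 5.2, 5.0, 5.1]
print(cusum_detect(stream, target_mean=2.0))  # 6
```

Once a change is signaled, the input-distribution estimate would be refit on post-change data, mirroring the role the change points play in the paper's framework.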
Industrial Metaverse in Supply Chain Management: Applications, Concepts, and Open Research Paths Hendrik van der Valk (TU Dortmund University, Fraunhofer Institute for Software and Systems Engineering ISST) and Julia Kunert, Niklas Harke, and Katharina Langenbach (TU Dortmund University) Tags: cyber-physical systems, digital twin, logistics, manufacturing, supply chain Abstract AbstractThis research study examines the applications of the industrial metaverse in supply chain management. It reviews and analyzes up-to-date knowledge along the Supply Chain Operations Reference Model by means of a structured literature review. We derive six core functionalities of the industrial metaverse in supply chains: visibility and monitoring, prediction, simulation, collaboration, training, and optimization. Furthermore, their presence along the different phases of a supply chain is investigated. Besides the functionalities, the relationship between the industrial metaverse, simulation applications, and digital twins is analyzed, and a brief description of possible metaverse architectures is provided. This study additionally derives research gaps within the emerging field of metaverse applications and defines paths to tackle them. pdfPotentials and Barriers of the Metaverse for Circular Economy Julia Kunert, Hendrik van der Valk, and Hannah Scheerer (TU Dortmund University) and Christoph Hoppe (Fraunhofer Institute for Software and Systems Engineering ISST) Tags: big data, digital twin, environment Abstract AbstractSustainability is a societal challenge that the circular economy tries to tackle. The metaverse, an emerging technology that incorporates digital twins and simulation in an immersive virtual environment, has not been thoroughly investigated in connection with the circular economy. Thus, the purpose of this study is to summarize the potentials and barriers of using the metaverse for the circular economy.
By conducting a structured literature review, this paper categorizes the findings into dimensions that are important for both the metaverse and the circular economy. A variety of potentials and barriers covering the different perspectives important for businesses aiming to comply with circular economy principles is discovered. The findings include potentials and barriers in several areas, such as access to the metaverse, associated costs, data, knowledge transfer, collaboration, innovation, product design, production planning, training of employees, and transportation. The results can be used to promote the implementation of circular economy principles. pdf Simulation as Digital TwinData-Driven Modeling for Digital Twins Chair: Jie Xu (George Mason University)
Extending Simulation Modeling Methodology for Digital Twin Applications Bhakti Stephan Onggo and Christine Currie (University of Southampton, CORMSIS) Abstract AbstractThe number of reported works on Digital Twins (DTs) has increased significantly in recent years. A fundamental component of a DT is the digital representation of a real object, process, or system that decision makers wish to evaluate or manage. One of the most utilized digital representations is a simulation model. Traditionally, simulation has been primarily employed for offline applications. However, the introduction of DT technology has enabled an effective online connection between a simulation model and its real counterpart, allowing the use of simulation as part of dynamic control systems for real-time decision making. This paper discusses whether steps in simulation modeling methodology, such as conceptual modeling, input modeling, model development, validation, and output analysis, need adjusting for DT applications. pdfLearning Simulation-based Digital Twins for Discrete Material Flow Systems: A Review Christian Schwede (Hochschule Bielefeld, Fraunhofer Institute for Software and Systems Engineering ISST) and Daniel Fischer (Hochschule Bielefeld) Tags: data driven, discrete-event, process mining, logistics, manufacturing Abstract AbstractDigital Twins (DTs) play a crucial role in the fourth industrial revolution. In the context of discrete material flow systems, companies under constant competitive pressure seek solutions to minimize costs and maximize performance. Simulation-based DTs can help make optimal decisions in the design, planning, and control of these systems. Until now, such DTs have been created and updated by domain experts, which incurs costs, and they often do not consider the advances made in machine learning to improve prediction quality. Learning DTs from data could be the solution for broader application.
A lot of work has already been done that contributes to this endeavor, yet the relevant building blocks originate from different scientific areas, resulting in the use of different terminology. Thus, we present a holistic review of relevant work and analyze the state of the art based on a new classification scheme, deriving relevant building blocks and gaps for future research. pdfProcess Mining as Catalyst of Digital Twins for Production Systems: Challenges and Research Opportunities Giovanni Lugaresi (KU Leuven) Tags: data driven, conceptual modeling, digital twin, Python Abstract AbstractThe productivity advantages of recent investments toward higher automation come along with escalating complexity in manufacturing systems. This prompts the leveraging of digital support assistants, notably digital twins, for production planning and control tasks. Process mining has emerged as a valuable tool in the realm of digital twinning, as it has proved its efficacy in tasks such as model generation, trace profiling, and performance evaluation. However, several challenges persist in different methodological and application areas, presenting valuable opportunities for both academia and industry. This work aims to shed light on a selection of topics uniting the fields of process mining and digital twins for manufacturing systems, with a reflection on current challenges and research opportunities that can be seized in the near future. pdf
Track Coordinator - Simulation in Education: Omar Ashour (Penn State University), Susan R. Hunter (Purdue University), Ashkan Negahban (Pennsylvania State University) Simulation in EducationInteractive Simulation Tools in Education Chair: Martijn Mes (University of Twente)
The Impact of Immersion Level When Learning Optimization Concepts via a Simulation Game Saurav Bandi (Penn State Great Valley), Sabahattin Gokhan Ozden (Penn State Abington), Omar Ashour (Penn State Behrend), and Ashkan Negahban (Pennsylvania State University) Tags: optimization, education Abstract AbstractSimulation can be employed as an interactive computer game to enable game-based learning. Educational simulations can also be combined with immersive technologies such as virtual reality (VR) to enhance student engagement and learning. While recent years have seen significant growth in the use of immersive technologies in education, the role and contribution of the additional immersion offered by VR still need to be explored. This paper aims to address this gap by comparing low- and high-immersion modes of a simulation game designed to familiarize students with the fundamental concepts of mathematical optimization. The game resembles performing a heuristic search on the solution space of an optimization problem and involves finding the highest peak in an arctic landscape. Our research experiments include three groups of students who play the game either in VR, in desktop mode, or via PowerPoint slides. Our statistical comparisons show that VR enhanced students’ sense of presence and learning. pdfUnderstanding Optimal Interactions between Students and a Chatbot during a Programming Task Jinnie Shin, Laura Cruz-Castro, Zhenlin Yang, Gabriel Castelblanco, Ashish Aggarwal, Walter Leite, and Bruce Carroll (University of Florida) Tags: data analytics, machine learning, education, C++ Abstract AbstractThis study explores integrating Large Language Models (LLMs) into computer science education by examining undergraduate interactions with a GPT-4-based chatbot during a formative assignment in an introductory course. We aim to delineate optimal help-seeking behaviors and ascertain whether effective problem-navigating strategies correlate with improved learning outcomes.
Using descriptive statistics and Structural Topic Modeling (STM), we analyze the types of questions posed and their connection to task completion success. Findings reveal a positive association between the number of attempts and help requests, indicating more engaged students seek assistance. STM analysis shows high-ability students address abstract concepts early, while lower-ability students focus on syntax-related issues. These insights underscore the need to evaluate interaction behaviors to optimize chatbot use in education, leading to proposed guidelines to enhance chatbot utilization, promoting responsible use and maximizing educational advantages. pdfA Flexible Educational Simulation Model To Study Autonomous Last-Mile Logistics Berry Gerrits and Martijn Mes (University of Twente) Tags: discrete-event, education, logistics, transportation, Siemens Tecnomatix Plant Simulation Abstract AbstractThis paper presents an open-source discrete-event simulation model to support simulation education of business and engineering students using an appealing problem setting of automated transport for last-mile logistics. The open-source nature of our approach facilitates customization and further development, fostering collaborative research and education initiatives. Within this context, we develop a flexible approach that utilizes OpenStreetMap to create engaging models for simulation education. The basic functionalities corresponding with last-mile logistics operations are included in the model and allow students to quickly experiment with novel logistics concepts, fleet configurations, and planning algorithms to evaluate operational efficiency, environmental consequences (e.g., carbon emissions), and societal factors (e.g., livability). To illustrate our approach, we focus on conceptualizing and modeling the campus of the University of Twente in Plant Simulation as a final assignment for a graduate-level simulation course. 
We present several options for the model's use for educational purposes. pdf Simulation in EducationTechnology-Enhanced Simulation Education Chair: Madison L Evans (Simio LLC, Auburn University)
Input Distribution Modeling Using the Kotlin Simulation Library Manuel Rossetti, Farid Hashemian, Maryam Aghamohammadghasem, Danh Phan, and Nasim Mousavi (University of Arkansas) Tags: input modeling, education, open source, student Abstract AbstractInput distribution modeling is an important aspect of stochastic simulation models. This paper provides an overview of the functionality of the discrete and continuous distribution modeling capabilities provided by the Kotlin Simulation Library (KSL). The library provides an API framework for defining distribution selection metrics, combining the metrics into an overall score, and recommending a distribution based on the scoring models. In addition, the library provides capabilities for performing goodness-of-fit statistical tests and other model-fit diagnostics. The paper provides context for how users can use existing capabilities and extend those capabilities to their needs. Examples are provided to illustrate the library. pdfInvestigating the Use of Generative AI in M&S Education James F. Leathrum, Yuzhong Shen, and Masha Sosonkina (Old Dominion University) Tags: discrete-event, machine learning, education Abstract AbstractLarge Language Models (LLMs) are rapidly creating a place for themselves in society. There are numerous reports, both good and bad, of their use in business, academia, government and society. While some organizations are trying to limit, or eliminate, their use, it appears inevitable that they will become a common “tool”. In education, there is a fear that students will not acquire critical thinking skills in the future, but we argue that LLMs will become a tool to assist students with critical thinking, giving guidance, feedback, and assessment. This paper investigates how the current state of LLMs can be integrated into modeling and simulation (M&S) education.
Example cases for modeling and simulation development are presented showing how an LLM can assist M&S design and education in anticipation of LLMs becoming a common tool for M&S practitioners. Current limitations are also highlighted, and where possible, short-term solutions are proposed. pdfIncorporating Video Modules in Simulation Education Jeffrey Smith (Simio LLC) and Madison Evans (Auburn University) Abstract AbstractThis paper describes the use of video modules to improve learning opportunities in university courses and commercial training courses on discrete-event simulation and data analytics. The paper gives the chronology and describes the resulting work products from our use of video modules since 2011. It also describes our methodologies and lessons learned for selecting topics for video modules, creating the individual videos, and creating/maintaining the hosting website. We also provide the perspective of a user not involved in the development. While this paper does not provide a rigorous scientific analysis of the methods or resulting impact on learning outcomes, we hope that anyone interested in developing and using video modules to augment their classes will find the general information useful. pdf Environment Sustainability and Resilience, Simulation in EducationDigital Twins in Education and Energy Chair: Daniel Jun Chung Hii (Kajima Corporation, Kajima Technical Research Institute Singapore)
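As an illustration of the distribution-recommendation idea described in the KSL paper above (fit several candidate families, score each fit, recommend the best), here is a minimal Python sketch. The scipy-based candidate set and the single Kolmogorov-Smirnov score are our own simplifying assumptions, not the KSL's Kotlin API:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=500)  # synthetic input data

# Candidate families and a simple selection metric: the Kolmogorov-Smirnov
# statistic (lower is better). A real library would combine several metrics
# into a weighted overall score.
candidates = {
    "exponential": stats.expon,
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
}

scores = {}
for name, dist in candidates.items():
    params = dist.fit(data)                        # maximum-likelihood fit
    ks_stat, _ = stats.kstest(data, dist.cdf, args=params)
    scores[name] = ks_stat

best = min(scores, key=scores.get)                 # recommend lowest score
print(best, round(scores[best], 4))
```

A fuller treatment, as the abstract describes, would combine multiple metrics (e.g., squared error, Anderson-Darling) into one overall score before recommending a distribution.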
Picking System Digital Twin: A Lab-based Case Study Vicky Sipasseuth and Michael E. Kuhl (Rochester Institute of Technology) Abstract AbstractDigital Twins have become a focal point of simulation modeling and analysis in recent years and seem to be gaining momentum. As such, there is a need to more fully integrate practical digital twin modeling and analysis into systems simulation courses. In this paper, we present a lab-based digital twin case study of a pick-to-light picking system. The digital twin design framework and methodology utilize a simulation model that acts as a virtual, near real-time representation of a physical picking system. The digital twin can be used to analyze picking system configurations such as alternative picking policies, inventory policies, worker allocation to picking zones, and related decisions. pdfNovel Methods for Teaching Simulation: Strengthening Digital Twin Development Amel Jaoua (National Engineering School of Tunis), Elisa Negri (Politecnico di Milano), and Mehdi Jaoua and Nabil Benkirane (Independent Researcher) Abstract AbstractThis article proposes new methods for teaching Discrete Event Simulation (DES) in manufacturing systems.
Over the last four decades, numerous books have offered methods for teaching DES as a what-if analysis tool for addressing stochastic problems. However, the emergence of the Digital Twin (DT) concept has posed challenges for such traditionally designed DES models. These models often struggle to evolve effectively into Real-Time Simulators (RTS). RTS are connected DES models embedded as kernels in the DT framework and synchronized based on real-time sensor data streams. Thus, the objective of this work is to introduce teaching methods that provide deeper insights into designing the high-fidelity DES models capable of evolving into RTS. It also illustrates how the Immersive Learning approach is employed to immerse students in a manufacturing environment through Virtual Reality (VR) experiences, allowing them to grasp key concepts such as granularity levels and synchronization challenges in deploying a DT. pdfTowards The Digital Twinning and Simulation of a Smart Building for Well-Being Daniel Jun Chung Hii and Takamasa Hasama (Kajima Technical Research Institute, Singapore, Kajima Corporation) Tags: agent-based, digital twin, project management, AnyLogic, industry Abstract AbstractIn the current Industry 5.0 era, smart cities and buildings are enabled by the integration and monitoring of Internet of Things (IoT) sensor infrastructures. Environment sensing and the counting of people and robots enable both digital twin simulations and agent-based modelling (ABM). This enables an understanding of the interactions between the built environment and humans. Movement analysis supports the planning and design of spaces, as well as the use of machine learning methods to learn from trajectories for space-usage prediction. Understanding human behavior and social interaction is important for generating people-friendly spaces. The GEAR is a smart, green, and WELL-certified building embedded with sensors that serves as a living lab for research and development.
The diverse workspace layouts create an ideal testbed to study interactions between humans and the built environment. The aim is to achieve more sustainable and better-designed spaces for human well-being in a fast-changing world. pdf
Track Coordinator - Simulation Optimization: David J. Eckman (Texas A&M University), Siyang Gao (City University of Hong Kong) Simulation OptimizationStochastic Optimization Chair: Raghu Bollapragada (The University of Texas at Austin)
A Smoothed Augmented Lagrangian Framework For Convex Optimization With Nonsmooth Stochastic Constraints Peixuan Zhang (Pennsylvania State University) and Uday Shanbhag (University of Michigan, Ann Arbor) Tags: optimization, sampling Abstract AbstractWe consider a convex stochastic optimization problem where both the objective and constraints are convex but possibly complicated by uncertainty and nonsmoothness. We present a smoothed sampling-enabled augmented Lagrangian (AL) framework that relies on inexact solutions to the AL subproblem, obtainable via a stochastic approximation framework. Under a constant penalty parameter, the dual suboptimality is shown to diminish at a sublinear rate while primal infeasibility and suboptimality both diminish at a slower sublinear rate. pdfImproving Dimension Dependence in Complexity Guarantees for Zeroth-order Methods via Exponentially-shifted Gaussian Smoothing Mingrui Wang, Prakash Chakraborty, and Uday Shanbhag (Pennsylvania State University) Abstract AbstractSmoothing-enabled zeroth-order (ZO) methods for nonsmooth convex stochastic optimization have assumed increasing relevance. A shortcoming of such schemes is the dimension dependence in the complexity guarantees, a concern that impedes truly large-scale implementations. We develop a novel exponentially-shifted Gaussian smoothing (esGS) gradient estimator by leveraging a simple change-of-variable argument. The moment bounds of the (esGS) estimator are characterized by a muted dependence on dimension. When the (esGS) estimator is incorporated within a ZO framework, the resulting iteration complexity bounds are reduced to $O(n\varepsilon^{-2})$ from $O(n^2\varepsilon^{-2})$, the latter being the best available for the existing two-point estimator with Gaussian smoothing. More specifically, we provide asymptotic and rate statements for nonsmooth convex and strongly convex regimes. Preliminary comparisons with existing schemes appear promising.
pdfCentral Finite-Difference Based Gradient Estimation Methods For Stochastic Optimization Raghu Bollapragada and Cem Karamanli (The University of Texas at Austin) and Stefan Wild (Lawrence Berkeley National Laboratory) Abstract AbstractThis paper presents an algorithmic framework for solving unconstrained stochastic optimization problems using only stochastic function evaluations. We employ central finite-difference based gradient estimation methods to approximate the gradients and dynamically control the accuracy of these approximations by adjusting the sample sizes used in stochastic realizations. Extending the analysis of Bollapragada et al. (2024) to nonconvex functions and central finite-difference based methods, we analyze the theoretical properties of the proposed framework. Our analysis yields sublinear convergence results to the neighborhood of the solution, and establishes the optimal worst-case iteration complexity and sample complexity for each gradient estimation method. Finally, we demonstrate the performance of the proposed framework and the quality of the gradient estimation methods through numerical experiments on nonlinear least squares problems. pdf Simulation OptimizationFeature Selection and Multi-Attribute Optimization Chair: Raul Astudillo (California Institute of Technology)
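To make the central finite-difference idea from the Bollapragada, Karamanli, and Wild abstract above concrete, here is a heavily simplified Python sketch: the gradient of a noisy objective is estimated coordinate-wise by central differences, with a fixed replication batch standing in for the paper's dynamic sample-size control. The test function, step size, and all constants are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def F(x, noise_scale=0.1):
    """Stochastic oracle: noisy evaluation of f(x) = ||x||^2."""
    return float(x @ x) + rng.normal(scale=noise_scale)

def central_fd_grad(x, h=0.1, batch=32):
    """Central finite-difference gradient estimate. A fixed batch of
    replications per coordinate controls the estimator's variance
    (the paper adjusts sample sizes dynamically instead)."""
    n = x.size
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        diffs = [(F(x + e) - F(x - e)) / (2 * h) for _ in range(batch)]
        g[i] = np.mean(diffs)
    return g

# Plain gradient descent driven by the finite-difference estimates.
x = np.array([1.0, -2.0])
for _ in range(200):
    x -= 0.1 * central_fd_grad(x)

print(np.round(x, 3))  # iterates settle near the minimizer at the origin
```

For this quadratic test function the central difference itself is bias-free, so the only error is simulation noise, which the batch averaging suppresses; for general nonconvex objectives the bias-variance trade-off in h and the sample sizes is exactly what the paper analyzes.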
Group COMBSS: Group Selection via Continuous Optimization Anant Mathur and Sarat Moka (UNSW Sydney), Benoit Liquet (Macquarie University), and Zdravko Botev (UNSW Sydney) Tags: machine learning, optimization, ranking and selection, student Abstract AbstractWe present a new optimization method for the group selection problem in linear regression. In this problem, predictors are assumed to have a natural group structure and the goal is to select a small set of groups that best fits the response. The incorporation of group structure in a design matrix is a key factor in obtaining better estimators and identifying associations between response and predictors. Such a discrete constrained problem is well-known to be hard, particularly in high dimensional settings where the number of predictors is much larger than the number of observations. We propose to tackle this problem by framing the underlying discrete binary constrained problem into an unconstrained continuous optimization problem. The performance of our proposed approach is compared to state-of-the-art variable selection strategies on simulated data sets. We illustrate the effectiveness of our approach on a genetic dataset to identify grouping of markers across chromosomes. pdfRobust Screening and Partitioning for Feature Selection: A Binary Simulation Optimization Problem Ethan Houser and Sara Shashaani (North Carolina State University) Tags: big data, data driven, machine learning, metamodeling, optimization Abstract AbstractFeature selection is the process of eliminating irrelevant or redundant covariates in a dataset to construct generalizable interpretable predictions. It is an NP-hard combinatorial problem with only heuristic or model-dependent greedy solution methods. To improve robustness, previous work proposed a zero-one simulation optimization by independently replicating model construction and out-of-sample error calculation. 
Genetic algorithms were used to solve feature selection showing improved robustness compared to existing techniques, albeit being slow, sensitive to hyperparameters, and lacking convergence guarantees. We propose a stochastic binary search method based on nested partitioning informed by an initial rapid screening phase. With new mechanisms for sampling, we propose steps to efficiently reduce the feature space and obtain an intelligent partitioning scheme. Our experiments show that our proposed algorithm finds fewer irrelevant features, is less sensitive to the data size, and is faster than competing methods. pdfMulti-Attribute Optimization under Preference Uncertainty Bhavik Shah (Google), Raul Astudillo (California Institute of Technology), and Peter Frazier (Cornell University) Abstract AbstractWe introduce multi-attribute optimization under preference uncertainty, a novel approach for optimization-based decision support when the decision-maker's preferences are uncertain. Here, each feasible design is associated with a vector of attributes, which is in turn assigned a utility by the decision-maker's utility function. This utility function has been incompletely estimated, and its remaining uncertainty is quantified by a Bayesian probability distribution. We develop an optimization-based formulation to generate a menu of diverse solutions, among which the decision-maker is expected to find a high-utility solution. The resulting optimization problem is challenging to solve in general, but we show that it can be approximately solved efficiently when the attributes and utility function are linear by reformulating it as a mixed-integer linear program, and using sample average approximation and submodular maximization. We also propose a posterior sampling approach that only requires optimizing individual samples of the user's utility function, supporting fast computation. pdf Simulation OptimizationOnline Optimization Chair: Haidong Li (Peking University)
Optimal Stopping for Clinical Trials with Economic Costs: A Simulation-Based Approach Amandeep Chanda, Michael C. Fu, and Eric V. Slud (University of Maryland) Abstract AbstractWe consider the problem of designing an early-stopping clinical trial investigating the efficacy of a medical intervention against the available standard of care. The standard approach is to determine a stopping rule minimizing the expected number of patients required, subject to error rate constraints, without explicitly considering costs that depend on the magnitude of the treatment effect. In this paper, we formulate an optimal stopping problem for clinical trials with instantaneous continuous response, where the objective is to minimize an overall risk comprising loss functions that account for treatment-effect-dependent costs, which may capture ethical and economic considerations. To solve the optimization problem, we propose a feasible-directions simulation-based algorithm requiring new stochastic gradient estimators, which we derive using Smoothed Perturbation Analysis. We conduct simulation experiments to test the effectiveness of the simulation optimization algorithm and to obtain insights into the effects of the various risk factors on the optimal solution. pdfReliable Online Decision Making with Covariates Heng Luo and Zhiyang Liang (Fudan University) and L. Jeff Hong (Fudan University, School of Management) Abstract AbstractIn online decision making, the challenge often lies in finding the optimal solution quickly based on the covariates observed in real time. In this paper, we propose to use the framework of offline simulation and online application to develop algorithms that are capable of solving online simulation optimization problems. In the offline stage, the algorithms solve the optimization problems many times based on different values of the covariates and build predictive models of the optimal solution with respect to the covariates.
In the online stage, once the covariates are observed, the optimal solution may be quickly determined by the predictive model. We focus on online strongly convex simulation optimization problems and propose to use different algorithms to construct the predictive models. We derive the rate of convergence of the optimality gaps of the proposed algorithms, and develop a finite-sample statistical measure of the optimality gap when these algorithms are used. pdfDynamic Assortment Optimization in Live-Streaming Sales Zishi Zhang (Peking University), Haidong Li and Ying Liu (University of Chinese Academy of Sciences), and Yijie Peng (Peking University) Tags: data driven, optimization Abstract AbstractThis paper explores the dynamic assortment optimization problem in the context of live-streaming sales. In response to the dynamic characteristics of live-streaming e-commerce, we propose a new choice model that extends the traditional Multinomial Logit model into the continuous time domain. This novel model enables the decoupling of parameter estimation for different products and facilitates the convenient updating of posterior parameters at any time. Finally, we introduce a myopic optimization algorithm based on Thompson Sampling. This algorithm effectively balances exploration and exploitation, captures fluctuations in the number of audiences, and demonstrates superior numerical performance. pdf Simulation OptimizationStochastic Programming Chair: Zeyu Liu (The University of Tennessee, Knoxville; University of Tennessee)
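The offline-simulation/online-application framework in the Luo, Liang, and Hong abstract above can be sketched in a few lines of Python. The toy quadratic objective, the covariate grid, and the linear predictive model below are our own stand-ins for the paper's algorithms:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy problem: minimize E[(x - xi)^2] where xi is centered on the
# covariate theta. The SAA minimizer is simply the sample mean.
def solve_offline(theta, n=200):
    samples = theta + rng.normal(scale=0.5, size=n)
    return samples.mean()

# Offline stage: solve the problem on a grid of covariate values.
grid = np.linspace(0.0, 10.0, 21)
solutions = np.array([solve_offline(t) for t in grid])

# Fit a predictive model (linear here) of optimal solution vs covariate.
coef = np.polyfit(grid, solutions, deg=1)

# Online stage: a new covariate arrives; predict the solution instantly
# instead of re-solving the optimization problem.
theta_new = 4.3
x_hat = float(np.polyval(coef, theta_new))
print(round(x_hat, 2))
```

The design point is that all expensive simulation happens offline; the online step is a single model evaluation, which is what makes real-time decision making feasible.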
An SDDP Algorithm for Multistage Stochastic Programs with Decision-dependent Uncertainty Nazlican Arslan (Northwestern University), Oscar Dowson (Dowson Farms), and David Morton (Northwestern University) Abstract AbstractStochastic programming provides mathematical models and algorithms for optimizing decisions under uncertainty. In formulating a stochastic program we typically assume that the probability distributions governing the random parameters are independent of the problem's decisions. Here, we study a multistage stochastic program with decision-dependent uncertainty. At each stage, binary decisions choose from a set of probability distributions and can increase the likelihood of favorable outcomes at a certain cost. We develop a variant of the stochastic dual dynamic programming (SDDP) algorithm to approximately solve this class of problems, using a convex relaxation of the algorithm's subproblems. This allows us to handle a type of large-scale multistage decision-dependent stochastic program, which was previously inaccessible. We provide computational results for a multi-product newsvendor problem with binary marketing options. pdfData-Driven Solutions and Uncertainty Quantification for Multistage Stochastic Optimization Yunhao Yan and Henry Lam (Columbia University) Abstract AbstractMultistage stochastic optimization problems appear commonly in various disciplines in operations management and system control. While computational methods have been studied, data-driven integration and uncertainty quantification for these problems appear less developed. In this paper, we propose a simple data-driven solution technique and uncertainty quantification method for these problems, based on natural multisample generalizations of the well-known sample average approximation and the so-called single replication procedure. 
Under the assumptions of stagewise independence and the use of parametrized policies, we justify statistical consistency and a coverage guarantee on bounding the optimality gap using our approaches. Our developments entail the establishment of several new statistical properties of the so-called multisample U-process that closely connect to multistage stochastic optimization. pdfA Simulation-Infused Optimization Approach for Decomposing Nonlinear Systems Zeyu Liu (West Virginia University) Tags: agent-based, optimization, simheuristic, transportation, AnyLogic Abstract AbstractWith the rapid advancements in modern computing technologies, simulation has been increasingly adopted to model complex real-world systems. Yet, such digital computational power remains underutilized in decision-making tasks, mainly due to the difficulties in integrating gray or black box simulation models with algebraic optimization. In this study, we address this by proposing a novel and generic simulation-infused optimization approach. Through decomposition, nonlinear subsystems are surrogated by simulation and approximated by cutting planes in a row generation algorithm for holistic optimization. Exploration and exploitation are introduced to the row generation algorithm via a novel parametric method. The proposed simulation-infused decomposition approach is applied to a multi-depot vehicle routing problem. Proof-of-concept experiments are conducted to validate the performance against benchmarks, showing drastic reductions in computational time and improvements upon conventional optimization methods when real-world nonlinear systems are involved. pdf Simulation OptimizationRanking & Selection 1 Chair: Dohyun Ahn (The Chinese University of Hong Kong)
Rate-Optimal Budget Allocation for the Probability of Good Selection Best Contributed Theoretical Paper - Finalist Taeho Kim (HKUST) and David Eckman (Texas A&M University) Tags: Monte Carlo, ranking and selection, sampling Abstract AbstractThis paper studies the allocation of simulation effort in a ranking-and-selection (R&S) problem with the goal of selecting a system whose performance is within a given tolerance of the best. We apply large-deviations theory to derive an optimal allocation for maximizing the rate at which the so-called probability of good selection (PGS) asymptotically approaches one, assuming that systems' output distributions are known. An interesting property of the optimal allocation is that some good systems may receive a sampling ratio of zero. We demonstrate through numerical experiments that this property leads to serious practical consequences, specifically when designing sequential R&S algorithms. In particular, we observe that the convergence and even consistency of existing algorithms can be negatively impacted when they are naively modified for the PGS goal. We offer empirical evidence of these challenges and a preliminary exploration of a simple potential correction. pdfThompson Sampling Procedures for Ranking and Selection with Pairwise Comparisons and Binary Outcomes Dongyang Li (National University of Singapore), Enver Yucesan (INSEAD), and Chun-Hung Chen (George Mason University) Tags: discrete-event, machine learning, ranking and selection, Matlab, student Abstract AbstractWe consider ranking and selection (R&S) problems where the performance of competing designs or alternatives can only be assessed by evaluating two alternatives simultaneously with a binary outcome; the true performance of an alternative is quantified by its average probability of outperforming other alternatives. 
We discuss the challenges associated with applying conventional R&S techniques to this particular setting and propose heuristic algorithms based on Thompson sampling to overcome those difficulties. Through a series of simple numerical experiments, we assess the effectiveness of these heuristics and highlight open research questions. pdfBest-Arm Identification with High-Dimensional Features Dohyun Ahn (The Chinese University of Hong Kong), Dongwook Shin (HKUST Business School), and Lewen Zheng (The Chinese University of Hong Kong) Abstract AbstractGiven a collection of stochastic systems (or arms), we focus on the problem of identifying the best system with the highest expected payoff by learning the unknown statistical characteristics of system payoffs via sequential sampling. The distributions of the system payoffs are governed by a linear model consisting of high-dimensional system features that are fixed and known, as well as unknown parameters that are common across all systems. However, due to the high dimensionality, the ordinary least-squares estimator for the unknown parameters exhibits a large variance, leading to significant errors in identifying the best system. Based on the theory of large deviations, we show that this performance degradation can be effectively addressed by using the LASSO estimator with a judiciously chosen regularization parameter. Furthermore, we provide a practical guideline for selecting the regularization parameter and design a dynamic sampling policy that improves the performance in identifying the best system. pdf Simulation OptimizationSimulation for Applied Probability and Optimization Chair: Gongbo Zhang (Peking University)
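A minimal sketch of Thompson sampling with pairwise Bernoulli comparisons, in the spirit of the Li, Yucesan, and Chen paper above; the win-probability matrix P, the uniform-opponent rule, and the Beta(1, 1) priors are our own toy assumptions, not the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# True probability that design i beats design j (rows vs columns).
P = np.array([[0.5, 0.6, 0.7],
              [0.4, 0.5, 0.6],
              [0.3, 0.4, 0.5]])
k = P.shape[0]

# Beta(1, 1) priors on each pairwise win probability.
wins = np.ones((k, k))
losses = np.ones((k, k))

for _ in range(2000):
    # Sample a win probability for every ordered pair from its posterior.
    theta = rng.beta(wins, losses)
    np.fill_diagonal(theta, 0.5)
    # Pick the design with the highest sampled average win rate and
    # compare it against a uniformly random opponent.
    i = int(np.argmax(theta.mean(axis=1)))
    j = int(rng.choice([x for x in range(k) if x != i]))
    if rng.random() < P[i, j]:        # binary outcome of the comparison
        wins[i, j] += 1
        losses[j, i] += 1
    else:
        losses[i, j] += 1
        wins[j, i] += 1

# Report the design with the highest posterior-mean average win rate.
best = int(np.argmax((wins / (wins + losses)).mean(axis=1)))
print(best)
```

Here performance is only observable through paired binary outcomes, which is exactly what makes conventional R&S machinery (built on per-design output distributions) awkward to apply directly.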
Flood Scenario Generation using the NORTA Model Ashutosh Shukla, John Hasenbein, and Erhan Kutanoglu (University of Texas at Austin) Abstract AbstractSeveral stochastic programming models have been developed for critical infrastructure resilience decision-making under extreme flood events. Generating flood scenarios for such models requires running advanced flood models on a sophisticated computing infrastructure for different parameterizations (for example, different hurricane intensity levels, tracks, etc.), which may not always be practical. To address this issue, in this study, we propose a Normal-to-Anything (NORTA) model-based flood scenario generation scheme, which requires significantly fewer computing resources. The scenarios we generate using the proposed approach preserve correlation in flood height at locations of interest, in our case, the power transmission grid's substation locations. We demonstrate our approach's efficacy with a case study using a synthetic power grid that is statistically similar to the actual Texas grid and the flood maps developed by the National Oceanic and Atmospheric Administration that represent the storm-surge risk in Texas. pdfComparative Analysis of Distance Metrics for Distributionally Robust Optimization in Queuing Systems: Wasserstein vs. Kingman Hyung-Khee Eun and Sara Shashaani (North Carolina State University) and Russell R. Barton (The Pennsylvania State University) Abstract AbstractThis study examines the effectiveness of different metrics in constructing ambiguity sets for Distributionally Robust Optimization (DRO). Two main approaches for building ambiguity sets are the moment- and the discrepancy-based approaches. The latter is more widely adopted because it incorporates a broader range of distributional information beyond moments. Among discrepancy-based metrics, the Wasserstein distance is often preferred for its advantageous properties over ϕ-divergence.
In this study, we propose a moment-based Kingman distance, an approximation of mean waiting time in G/G/1 queues, to determine the ambiguity set. We demonstrate that the Kingman distance provides a straightforward and efficient method for identifying worst-case scenarios for simple queue settings. In contrast, the Wasserstein distance requires exhaustive exploration of the entire ambiguity set to pinpoint the worst-case distributions. These findings suggest that the Kingman distance could offer a practical and effective alternative for DRO applications in some cases. pdfSolving Mixed Integer Linear Programs by Monte Carlo Tree Search Gongbo Zhang and Yijie Peng (Peking University) Abstract AbstractMixed Integer Linear Programs (MILPs) are powerful tools for modeling and solving combinatorial optimization problems. Solving an MILP is NP-hard due to the integrality requirement, and the branch and bound (B&B) algorithm is a widely used exact solution method. In this work, we explore the use of Monte Carlo Tree Search (MCTS) to guide the search within the space composed of branching candidate variables, aiming to efficiently find the optimal solution (if it exists) for an MILP. We adapt the Asymptotically Optimal Allocation for Trees (AOAT) algorithm, a recently proposed MCTS approach in the simulation and optimization field, for solving MILPs. Numerical results demonstrate the potential benefits of the proposed method. pdf Simulation OptimizationRanking & Selection 2 Chair: Yuwei Zhou (Georgia Institute of Technology)
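The core NORTA mechanism used in the Shukla, Hasenbein, and Kutanoglu paper above — correlated standard normals pushed through the normal CDF and then through the inverse CDFs of arbitrary target marginals — can be sketched in Python. The correlation matrix and the gamma/lognormal "flood-height" marginals below are hypothetical; a real NORTA implementation also calibrates the input Gaussian correlation so that the output correlation matches the data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Gaussian correlation between two hypothetical substation sites.
corr = np.array([[1.0, 0.8],
                 [0.8, 1.0]])
L = np.linalg.cholesky(corr)

# Step 1: correlated standard normals.
z = rng.standard_normal((10000, 2)) @ L.T
# Step 2: map to correlated uniforms via the normal CDF.
u = stats.norm.cdf(z)
# Step 3: apply inverse CDFs of the desired marginals -- here,
# hypothetical flood-height distributions (gamma and lognormal).
flood = np.column_stack([
    stats.gamma.ppf(u[:, 0], a=2.0, scale=1.5),
    stats.lognorm.ppf(u[:, 1], s=0.5, scale=2.0),
])

# Output correlation stays close to (though not exactly equal to)
# the input Gaussian correlation.
print(np.round(np.corrcoef(flood.T)[0, 1], 2))
```

Each row of `flood` is one joint scenario with the desired marginals and (approximately) the desired cross-site correlation, generated at negligible cost compared to rerunning a physics-based flood model.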
Selection of the Best System with an Optimized Continuous Variable Yuhao Wang, Seong-Hee Kim, and Enlu Zhou (Georgia Institute of Technology) Tags: optimization, ranking and selection, student Abstract AbstractIn this paper, we consider a generalized ranking and selection problem, where each system's performance depends on a continuous decision variable necessitating optimization. We focus on a fixed confidence formulation, aiming to find a near-optimal system alongside its corresponding decision variable, meeting specified error tolerances with prescribed confidence levels. To achieve this, we introduce a multi-stage optimization-pruning framework. This framework alternates between optimizing systems using stochastic gradient descent and evaluating their performance through feasibility checks. Our proposed approach offers computational savings by identifying sub-optimal systems before investing effort in optimizing them to the desired accuracy.
We demonstrate its efficacy through a numerical study. pdfSelecting the Safest Design in Rare Event Settings Best Contributed Theoretical Paper - Finalist Anirban Bhattacharjee and Sandeep Kumar Juneja (Ashoka University) Tags: optimization, ranking and selection, rare events, R, student Abstract AbstractFinitely many simulatable designs are given and we aim to identify the safest one, i.e., that with the smallest probability of catastrophic failure. We consider this problem in a ranking and selection or equivalently the multi-armed-bandit best-arm-identification framework where we aim to identify the safest design/arm with the lowest probability of failure with probability at least $1-\delta$ for a pre-specified, for a pre-specified, small $\delta$. To illustrate the rarity structure crisply, we study the problem in an asymptotic regime where the design failure probabilities shrink to zero at varying rates.
In this set-up, we consider the well-known information-theoretic lower bound on sample complexity and identify the simplifications that arise due to the rarity framework. A key insight is that the sample complexity is governed by the rarity of the second safest design. The lower bound guides and speeds up the proposed algorithm, which is intuitive and asymptotically matches the lower bound. pdfFinding Feasible Systems in the Presence of a Probability Constraint Taehoon Kim, Sigrun Andradottir, and Seong-Hee Kim (Georgia Institute of Technology) and Yuwei Zhou (Indiana University) Tags: optimization, ranking and selection Abstract AbstractWe consider the problem of determining feasible systems among a finite set of simulated alternatives with respect to a probability constraint, where observations from stochastic simulations are Bernoulli distributed. Most statistically valid procedures for feasibility determination consider constraints on the means of normally distributed observations. When observations are Bernoulli distributed, one can still use the existing procedures by treating batch means of Bernoulli observations as basic observations. However, achieving approximate normality may require a large batch size, which can lead to unnecessary waste of observations in reaching a decision. This paper proposes a procedure that utilizes Bernoulli-distributed observations to perform feasibility checks. We demonstrate that when the observations are Bernoulli distributed, our procedure outperforms an existing feasibility determination procedure that was developed for a constraint on normally distributed observations. pdf Simulation OptimizationSimheuristics Chair: Johanna Wiesflecker (The University of Edinburgh)
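The Bernoulli feasibility-determination problem in the last abstract above can be illustrated with a much cruder fixed-sample check; a sketch using a Hoeffding confidence interval (a stand-in, not the paper's sequential procedure):

```python
import math

def bernoulli_feasible(successes, n, threshold, alpha=0.05):
    """Fixed-sample feasibility check for P(success) >= threshold using a
    Hoeffding confidence interval on Bernoulli observations. This is a crude
    stand-in for a statistically valid sequential procedure."""
    p_hat = successes / n
    half_width = math.sqrt(math.log(2.0 / alpha) / (2.0 * n))
    if p_hat - half_width >= threshold:
        return "feasible"
    if p_hat + half_width < threshold:
        return "infeasible"
    return "undecided"  # collect more Bernoulli observations
```

With 950 successes in 1000 trials and a 0.9 threshold, the lower confidence limit clears the threshold and the system is declared feasible; a sequential procedure would typically reach such decisions with fewer observations.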
Intelligent Layout Reconfiguration for Reconfigurable Assembly System: A Genetic Algorithm Approach Jisoo Park (Sungkyunkwan University), Seog-Chan Oh (General Motors Research and Development), Whan Lee and Changha Lee (Sungkyunkwan University), Hua-tzu Fan and Jorge Arinez (General Motors Research and Development), and Sang Do Noh (Sungkyunkwan University) Tags: optimization, manufacturing, Python, Siemens Tecnomatix Plant Simulation, student Abstract AbstractThe ever-evolving automotive industry landscape, driven by shifting customer demands, necessitates flexible manufacturing solutions. Reconfigurable Manufacturing Systems (RMS), integrating modular facilities and Automated Mobile Robots (AMRs), emerge as pivotal alternatives to inflexible dedicated conveyor systems. This study delves into optimizing layout reconfiguration within automotive assembly, with a specific focus on the Reconfigurable Assembly System (RAS), which inherits the traits of RMS. We focus on scenarios characterized by frequent production schedule changes, which necessitate frequent layout reconfiguration. Our approach prioritizes maintaining high area utilization without compromising throughput. In this study, we modified NSGA-II, one of the advanced Genetic Algorithms (GA), and propose a layout reconfiguration algorithm to concurrently optimize two key objectives: (1) area utilization and (2) throughput, crucial facets of layout optimization. The proposed algorithm, integrated with discrete event simulation models spanning six layout scenarios, demonstrates a significant enhancement in area utilization without compromising throughput, as confirmed by simulation studies.
pdfMatrix Assembly System Scheduling Optimization In Automotive Manufacturing: A Deep Q-Network Approach Whan Lee (Sungkyunkwan University), Seog-Chan Oh (General Motors Research and Development), Jisoo Park and Changha Lee (Sungkyunkwan University), Hua-tzu Fan and Jorge Arinez (General Motors Research and Development), and Sejin An and Sang Do Noh (Sungkyunkwan University) Tags: optimization, system dynamics, Python, Siemens Tecnomatix Plant Simulation, industry Abstract AbstractIn response to the demand diversification in automobile production, traditional manufacturing processes are transitioning towards more flexible systems with dynamic scheduling methods. The Matrix System (MS) stands out for its utilization of Autonomous Mobile Robots (AMRs) and multi-purpose workstations, enabling a dynamic production environment. Each AMR is tasked with transporting a partially assembled vehicle through multiple workstations until final assembly, adhering to predefined precedence orders. However, determining operation schedules amidst the complexity of multi-model systems poses a significant challenge in minimizing manufacturing time. To address this, we formalize the problem as a Markov Decision Process (MDP) and propose a Deep Q-Network (DQN) based scheduling optimization algorithm for the Vehicles Production Scheduling (VPS) problem. Our approach utilizes discrete event simulation to assess candidate actions suggested by the DQN, aiming to derive an optimal policy. This paper validates the proposed algorithm by comparing it with various dispatching rules. pdf
Simheuristics for Strategic Workforce Planning at a Busy Airport Best Contributed Applied Paper Johanna Wiesflecker, Maurizio Tomasella, and Thomas W. Archibald (The University of Edinburgh) Tags: Monte Carlo, simheuristic, aviation, Python, student Abstract AbstractAirport demand frequently exceeds capacity. An airport's capacity bottleneck is often located in its runway system. Other times, it is elsewhere, including the terminal facilities. At one of the busiest UK airports, the subject of this study, the bottleneck is located in the passenger security hall, where staff are employed directly by the airport operator. This airport plans to restructure the current workforce's overall size and types of contracts. Our work proposes a Simheuristic approach that utilizes current workforce data and demand forecasts for the upcoming season to adjust the contractual configurations systematically. We employ simulation to test the expected costs (vs flexibility needs) of the likely rosters that each contractual configuration will allow. Using the simulation results, the algorithm aims to identify the optimal contractual configuration to minimize costs while ensuring the adaptability required to address unforeseen changes in the flight schedule as each season is underway. pdf Data Science for Simulation, Simulation as Digital Twin, Simulation OptimizationCross-Track Session 2: Methods Chair: Jinbo Zhao (Texas A&M University)
Simulation Optimization with Non-Stationary Streaming Input Data Songhao Wang (Southern University of Science and Technology), Haowei Wang (Rice-Rick Digitalization PTE. Ltd.), Jianglin Xia (Southern University of Science and Technology), and Xiuqin Xu (McKinsey & Company) Tags: data driven, input modeling, optimization, student Abstract AbstractSimulation optimization has become an important tool for the design and analysis of real-world systems. In stochastic simulation, the input distribution is a main driving force to account for system randomness. Most existing works on input modeling focus on stationary input distributions. In reality, however, input distributions can experience sudden disruptive changes due to external factors. In this work, we consider input modeling through non-stationary streaming input data, where the input data arrive sequentially across different decision stages. Both the parameters of the input distributions and the disruptive change points are unknown. We use a Markov Switching Model to estimate the non-stationary input distributions, and design a metamodel-based approach to solve the resulting optimization problem. The proposed metamodel and optimization algorithm can utilize the simulation results from all the past stages. A numerical study on an inventory system shows that our algorithm can solve the problem more efficiently compared to common approaches. pdfCatalyzing Intelligent Logistics System Simulation with Data-driven Decision Strategies Shiqi Hao, Yang Liu, Yu Wang, Xiaopeng Huang, Muchuan Zhao, and Xiaotian Zhuang (JD Logistics, Inc.) Tags: machine learning, logistics, Java, Python, industry Abstract AbstractMachine learning is becoming an important technique in modern simulation systems due to its strong capability of capturing the random, complex, and dynamic features of the physical world.
Based on these advantages, it has been employed as a powerful tool that enables the intelligent simulation of large-scale logistics systems in a highly efficient manner. Inspired by these applications, this work presents a new paradigm, where machine learning is utilized to generate data-driven decision strategies to accurately emulate the practical operations in logistics systems and improve the simulation accuracy. Compared with existing approaches, the proposed method is also characterized by high flexibility and transparency. Consequently, it can adapt to a large variety of logistics system architectures and capture adequate details of system dynamics. Experiments have been conducted based on the simulation of large-scale real-world logistics systems, where the proposed method demonstrates superior accuracy in both strategy learning and simulation. pdfA Framework for Digital Twin Collaboration Zhengchang Hua (Southern University of Science and Technology, University of Leeds); Karim Djemame (University of Leeds); Nikos Tziritas (University of Thessaly); and Georgios Theodoropoulos (Southern University of Science and Technology) Tags: agent-based, distributed, complex systems, digital twin, student Abstract AbstractDigital Twins (DTs) have emerged as a powerful tool for modeling Large Complex Systems (LCSs). Their strength lies in the detailed virtual models that enable accurate predictions, which present challenges for traditionally centralized approaches due to the immense scale and decentralized ownership of LCSs. This paper proposes a framework that leverages the prevalence of individual DTs within LCSs. By facilitating the exchange of decisions and predictions, this framework fosters collaboration among autonomous DTs, enhancing performance. Additionally, a trust-based mechanism is introduced to improve system robustness against poor decision-making within the collaborative network.
The framework's effectiveness is demonstrated in a virtual power plant (VPP) scenario. The evaluation results confirm the system’s objectives across various test cases and show scalability for large deployments. pdf Simulation OptimizationEmpirical Studies Chair: Shane G. Henderson (Cornell University)
A Preliminary Study on Accelerating Simulation Optimization with GPU Implementation Jinghai He, Haoyu Liu, Yuhang Wu, Zeyu Zheng, and Tingyu Zhu (University of California, Berkeley) Abstract AbstractWe provide a preliminary study on utilizing the GPU (Graphics Processing Unit) to accelerate computation for three simulation optimization tasks with either first-order or second-order algorithms. Compared to an implementation using only the CPU (Central Processing Unit), the GPU implementation benefits from the computational advantages of parallel processing for large-scale matrix and vector operations. Numerical experiments demonstrate the computational advantage of the GPU implementation in simulation optimization, and show that this advantage further increases as the problem scale increases. pdfRepeatedly Solving Similar Simulation-Optimization Problems: Insights from Data Farming Nicole Felice and Sara Shashaani (North Carolina State University), David Eckman (Texas A&M University), and Susan Sanchez (Naval Postgraduate School) Abstract AbstractWe study a setting in which a decision maker solves a series of similar simulation-optimization problems, where parameters of the problem vary over time to reflect the most up-to-date conditions. For example, in a generalized newsvendor problem, the newsvendor must choose how much of each raw material to order each day in response to the latest prices for procuring and salvaging raw materials. At the beginning of each day, a simulation-optimization solver is used to recommend a solution for that day's problem. This setting prompts several questions of potential interest: How sensitive are the solver's recommended solution and performance to changes in the problem's parameters? Does the use of a fixed set of solver hyperparameters lead to consistently good performance, or would judiciously tuning these hyperparameters lead to a practically significant improvement?
We leverage data farming to provide some preliminary answers and describe additional insights uncovered during the process. pdfEvaluating Solvers for Linearly Constrained Simulation Optimization Natthawut Boonsiriphatthanajaroen and Rongyi He (Cornell University), Litong Liu and Tinghan (Joe) Ye (Georgia Institute of Technology), and Shane G. Henderson (Cornell University) Abstract AbstractLinearly constrained simulation optimization problems are those that include deterministic linear constraints in addition to an objective function that can only be evaluated through simulation. We provide several solvers for linearly constrained simulation optimization that all rely on gradient estimates of the objective function. We compare these solvers on random instances of 4 test problems from SimOpt. pdf Simulation OptimizationBayesian Optimization Chair: Giulia Pedrielli (Arizona State University)
Black-Box Simulation-Optimization with Quantile Constraints: An Inventory Case Study Ebru M. Angun (Galatasaray University) and Jack P. C. Kleijnen (Tilburg University) Tags: metamodeling, optimization, Matlab Abstract AbstractWe apply a recent variant of “efficient global optimization” (EGO). EGO is closely related to Bayesian optimization (BO): both EGO and BO treat the simulation model as a black box, and use a Kriging metamodel or Gaussian process. The recent variant of EGO combines (i) EGO for unconstrained optimization, and (ii) the Karush-Kuhn-Tucker optimality conditions for constrained optimization. EGO sequentially searches for the global optimum. We apply this variant and a benchmark EGO variant to an (s, S) inventory model. We aim to minimize the mean inventory costs—excluding disservice costs—while satisfying a prespecified threshold for the 90%-quantile of the disservice level. Our numerical results imply that the mean inventory costs increase by 2.5% if management is risk-averse instead of risk-neutral—using the mean value. Comparing the two EGO variants shows that these variants do not give significantly different
results, for this application. pdfMulti Agent Rollout for Bayesian Optimization Shyam Sundar Nambiraja and Giulia Pedrielli (Arizona State University) Tags: agent-based, distributed, estimation, optimization, Python Abstract AbstractSolving black-box global optimization problems efficiently across domains remains challenging, especially for large-scale optimization problems. Bayesian optimization has obtained important success as a black-box optimization technique based on surrogates, but it still suffers when applied to large-scale heterogeneous landscapes. Recent approaches have proposed non-myopic approximations and partitioning of the input domain into subregions to prioritize regions that capture important areas of the solution space. We propose a Multi Agent Rollout formulation of Bayesian optimization (MAroBO) that partitions the input domain among a finite set of agents for distributed sampling. In addition to selecting candidate samples from their respective subregions, these agents also influence each other in partitioning the subregions. Consequently, a portion of the function is optimized by these agents, which prioritize candidate samples that do not undermine exploration in favor of single-step greedy exploitation. The efficacy of MAroBO is demonstrated on synthetic test functions. pdf
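Both abstracts in this session rely on Kriging/Gaussian-process surrogates, and the sequential search in EGO-style methods is typically driven by an acquisition function with a closed form under a Gaussian posterior. A minimal sketch of the classic expected-improvement criterion for minimization (illustrative background, not code from either paper):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Closed-form expected improvement (for minimization) at a candidate
    point whose posterior under the Kriging/GP metamodel is N(mu, sigma^2);
    f_best is the best objective value observed so far."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)  # no posterior uncertainty left
    z = (f_best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (f_best - mu) * cdf + sigma * pdf
```

EGO evaluates this criterion over candidate inputs and simulates the maximizer next; constrained variants such as the one in the first abstract modify the criterion rather than this basic building block.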
Uncertainty Quantification and Robust Simulation Track Coordinator - Uncertainty Quantification and Robust Simulation: Ilya Ryzhov (University of Maryland), Wei Xie (Northeastern University) Uncertainty Quantification and Robust SimulationInference and Optimization of Continuous Functions Chair: Qiong Zhang (Clemson University)
Bi-objective Bayesian Optimization with Transformed Additive Gaussian Processes Caroline Kerfonta, Qiong Zhang, and Margaret Wiecek (Clemson University) Abstract AbstractSimulation is often used to optimize the performance of an engineering or scientific process. The input-output relation of a simulation model can be a black-box function that is expensive to evaluate. Bayesian optimization, popularly used to improve efficiency in searching for optimal input settings, has been extended to bi-objective optimization problems. However, a common challenge in bi-objective Bayesian optimization is the computational cost of maximizing the expected hypervolume improvement (EHI) to select the next input point to evaluate. In this paper, we utilize the transformed additive Gaussian process to simplify the objectives to additive functions in each dimension of the input space. Under this model assumption, the maximization of EHI over the entire decision space is reduced to the maximization of the resulting EHI functions for each dimension separately. We demonstrate the performance of the proposed method through numerical comparisons with bi-objective Bayesian optimization based on the traditional Gaussian process model. pdfEnergetic Variational Gaussian Process Regression Lulu Kang (University of Massachusetts Amherst); Yuanxing Cheng (Illinois Institute of Technology); Yiwei Wang (University of California, Riverside); and Chun Liu (Illinois Institute of Technology) Tags: machine learning, optimization, sampling, digital twin, Python Abstract AbstractThe Gaussian process (GP) regression model is a widely employed supervised learning approach. In this paper, we estimate the GP model through variational inference, particularly employing the recently introduced energetic variational inference method by Wang, Chen, Liu, and Kang (2021). Under the GP model assumptions, we derive posterior distributions for its parameters.
The energetic variational inference approach bridges the Bayesian sampling and optimization and enables approximation of the posterior distributions and identification of the posterior mode. By incorporating a Gaussian prior on the mean component of the GP model, we also apply shrinkage estimation to the parameters, facilitating variable selection of the mean function. The proposed GP method outperforms some existing software packages on three benchmark examples. pdfPlausible Inference with a Plausible Lipschitz Constant Gregory Keslin, Daniel Apley, and Barry L. Nelson (Northwestern University) Abstract AbstractPlausible inference is a growing body of literature that treats stochastic simulation as a gray box when structural properties of the simulation output performance measures as a function of design, decision or contextual variables are known. Plausible inference exploits these properties to allow the outputs from values of decision variables that have been simulated to provide inference about output performance measures at values of decision variables that have not been simulated; statements about the possible optimality or feasibility are examples. Lipschitz continuity is a structural property of many simulation problems. Unfortunately, the all-important---and essential for plausible inference---Lipschitz constant is rarely known. In this paper we show how to obtain plausible inference with an estimated Lipschitz constant that is also derived by plausible inference reasoning, as well as how to create the experiment design to simulate. pdf Uncertainty Quantification and Robust SimulationInference and Analysis for Network Models Chair: Wei Xie (Northeastern University)
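Once a Lipschitz constant L is fixed, the plausible-inference interval described in the last abstract above admits a one-line construction; a sketch in one dimension (a hypothetical helper illustrating the idea, not the paper's procedure for estimating L):

```python
def lipschitz_bounds(x0, samples, L):
    """Plausible-inference interval for f(x0), given noiseless simulated pairs
    (x_i, f_i) and a Lipschitz constant L, in one dimension. Each sample pins
    f(x0) to [f_i - L*|x0 - x_i|, f_i + L*|x0 - x_i|]; the plausible set is
    the intersection of these intervals."""
    lo = max(f - L * abs(x0 - x) for x, f in samples)
    hi = min(f + L * abs(x0 - x) for x, f in samples)
    return lo, hi  # lo > hi signals that L was implausibly small
```

For example, samples (0, 0) and (2, 2) with L = 1 pin f(1) exactly to 1, while L = 2 only bounds it within [0, 2]; this is why the estimated constant drives how sharp the resulting optimality or feasibility statements can be.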
Linear Noise Approximation Assisted Bayesian Inference on Mechanistic Model of Partially Observed Stochastic Reaction Network Wandi Xu and Wei Xie (Northeastern University) Abstract AbstractTo support mechanism online learning and facilitate digital twin development for biomanufacturing processes, this paper develops an efficient Bayesian inference approach for partially observed enzymatic stochastic reaction network (SRN), a fundamental building block of multi-scale bioprocess mechanistic model. To tackle the critical challenges brought by the nonlinear stochastic differential equations (SDEs)-based mechanistic model with partially observed state and having measurement errors, an interpretable Bayesian updating linear noise approximation (LNA) metamodel, incorporating the structure information of the mechanistic model, is proposed to approximate the likelihood of observations. Then, an efficient posterior sampling approach is developed by utilizing the gradients of the derived likelihood to speed up the convergence of MCMC. The empirical study demonstrates that the proposed approach has a promising performance. pdfAdjoint Sensitivity Analysis on Multi-scale Bioprocess Stochastic Reaction Network Keilung Choy and Wei Xie (Northeastern University) Abstract AbstractMotivated by the pressing challenges in the digital twin development for biomanufacturing processes, we introduce an adjoint sensitivity analysis (SA) approach to expedite the learning of mechanistic model parameters. In this paper, we consider enzymatic stochastic reaction networks representing a multi-scale bioprocess mechanistic model that allows us to integrate disparate data from diverse production processes and leverage the information from existing macro-kinetic and genome-scale models. 
To support forward prediction and backward reasoning, we develop a convergent adjoint SA algorithm studying how perturbations of model parameters and inputs (e.g., the initial state) propagate through enzymatic reaction networks and impact output trajectory predictions. This SA can provide a sample-efficient and interpretable way to assess the sensitivities between inputs and outputs accounting for their causal dependencies. Our empirical study underscores the resilience of these sensitivities and illuminates a deeper comprehension of the regulatory mechanisms behind bioprocesses through sensitivities. pdfImportance Sampling of Rare Events for Distribution Networks with Stochastic Loads Mark Christianen (Eindhoven University of Technology), Henry Lam (Columbia University), Maria Vlasiou (University of Twente), and Bert Zwart (CWI) Abstract AbstractDistribution networks are low-voltage electricity grids at the neighborhood level. Within these networks, failures can occur as rare events, triggered by stochastic loads that push voltage levels beyond safe limits. To assess the resilience and reliability of these networks, estimating voltage exceedance probabilities is therefore important. We develop importance-sampling strategies to estimate failure probabilities. We do so using two components. First, we propose a change of measure, using either the Large Deviations Principle and linear power flow equations or the Cross-Entropy method to improve sampling efficiency. Second, we determine feasibility of loads by using our previously developed duality method to overcome the computational complexity of directly solving nonlinear power flow equations using methods such as Newton-Raphson and backward-forward sweep algorithms. Experiments on an IEEE 15-bus network show that this methodology offers a fast and accurate estimation of failure probabilities in distribution networks.
pdf Uncertainty Quantification and Robust SimulationEstimation and Uncertainty Quantification in Ranking and Selection Chair: Andres Alban (Frankfurt School of Finance & Management)
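The importance-sampling strategy in the distribution-network abstract above rests on a change of measure; for a one-dimensional Gaussian tail this reduces to a mean shift. A toy sketch (the mean-shift proposal here stands in for the paper's large-deviations and cross-entropy tilts):

```python
import math
import random

def is_tail_prob(threshold, n=100000, seed=1):
    """Importance-sampling estimate of p = P(Z > threshold) for Z ~ N(0, 1),
    sampling from the mean-shifted proposal N(threshold, 1). The likelihood
    ratio of target over proposal is exp(-threshold*z + threshold^2/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(threshold, 1.0)  # draw from the shifted proposal
        if z > threshold:
            total += math.exp(-threshold * z + 0.5 * threshold * threshold)
    return total / n
```

For a rare event such as P(Z > 4), roughly 3.2e-5, crude Monte Carlo with the same budget would see only a handful of exceedances, while the shifted proposal makes about half the samples count and yields a low-variance estimate.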
Estimating Value of Information Arm Allocation Indices in Contextual Ranking and Selection Problems Andres Alban (Frankfurt School of Finance & Management) and Stephen E. Chick and Spyros I. Zoumpoulis (INSEAD) Abstract AbstractContextual ranking & selection is attracting increasing attention in simulation and other fields. A successful approach to addressing related challenges uses arm allocation indices that compute the Bayesian expected value of information of one-step look-ahead policies. We recall recent work on such indices for linear contextual bandits that take advantage of structural information about the nature of the covariates that describe contexts. Such indices can be computed exactly with a finite number of contexts and no delay in observing outcomes, but may require Monte Carlo simulation otherwise. Our contribution is to describe and quantify the benefits of two variance reduction techniques (conditional Monte Carlo and common random numbers) to estimate such allocation indices for contextual ranking & selection problems when some covariates are continuous or outcomes are observed with delay. We find that both techniques significantly improve estimates and the speed of inference, but conditioning is particularly useful. pdfInput Parameter Uncertainty Quantification with Robust Simulation and Ranking and Selection Lewei Shi and Zhaolin Hu (Tongji University) and Ying Zhong (University of Electronic Science and Technology of China) Tags: input modeling, ranking and selection Abstract AbstractIn this paper, we consider an input parameter uncertainty quantification problem, where the true distributions of the input random factors that drive a simulation model are unknown but belong to some parametric families and we aim to find the worst-case expected performance of the simulation model. 
We propose a framework to calculate the upper and lower bounds of the worst-case expected performance by adopting robust simulation and ranking and selection methods, respectively. The confidence intervals of the upper and lower bounds are further constructed. The framework can address various parametric families in the input modeling and is flexible in the choice of ranking and selection procedure. Numerical experiments show the effectiveness of the framework. pdf
Track Coordinator - Professional Development: Thomas Berg (The University of Tennessee, Knoxville), Bahar Biller (SAS Institute, Inc) Professional DevelopmentPanel: On The Place of Simulation Modeling and Analysis in the Imagination Age Chair: Bahar Biller (SAS Institute, Inc)
Panel: On The Place of Simulation Modeling and Analysis in the Imagination Age Bahar Biller (SAS Institute, Inc); Jeffrey S. Smith (Simio Software); Renee Thiesing (Promita Consulting); Andreas Tolk (MITRE Corporation); and Enver Yucesan (INSEAD) Abstract AbstractThis session explores the impact of big data analytics and computing on simulation research and applications both in academia and in industry. It focuses on recent technology disruptions and how simulation modeling and analysis would integrate in this new landscape. The goal is to provide simulation researchers and practitioners some insight how to better utilize their existing skillset and understand what new skills to acquire. pdf Professional DevelopmentPanel: Learning from Career Paths of Simulation Practitioners Chair: Bahar Biller (SAS Institute, Inc)
Panel: Learning from Career Paths of Simulation Practitioners Bahar Biller (SAS Institute, Inc); Tom Berg (Tickle College of Engineering); Nelson Alfaro Rivas (MOSIMTEC, LLC); Tzai-Shuen Chen (Bayer); and Rainer Dronzek (Simulation Modeling Services) Abstract AbstractThe goal of this session is to share panelists’ career experiences and allow the audience to draw insights on their own career management. pdf
Track Coordinator - Vendor: Amy Greer (MOSIMTEC, LLC), Edward Williams (PMC) VendorVendor Workshop: Rockwell Automation Climbing Out of the Data Dump: Using Business Intelligence Tools for Visualizing and Analyzing Rockwell Automation’s Arena Simulation Data Melanie Barker and Nathan Ivey (Rockwell Automation Inc.) Abstract AbstractThe primary objective of any discrete event simulation project is to generate meaningful output that informs decision-making. However, interpreting this data effectively often requires visual representations to enhance understanding. In this workshop, we will demonstrate how to harness the features of business intelligence visualization tools to connect to an Arena simulation report database, enabling you to visualize and analyze your simulation results more effectively. In this workshop we will use Microsoft's Power BI, but the techniques apply to your company's BI tool of choice.
We’ll begin by guiding you through the process of replicating the standard Excel-based statistical output report in Power BI and importing custom log files generated by the example model. From there, we’ll explore a range of basic visualization options to help you better interpret and present your simulation outcomes. These techniques should be applicable with other business intelligence tools and other simulation packages as well.
If time allows, we will also introduce some of Arena's animation features, which can be valuable tools for debugging and delivering more impactful presentations. VendorVendor Workshop: Simio LLC, Session 1 Using Simulation to Analyze the Business Impact and Value of DDMRP Chad Smith (Simio LLC) Abstract AbstractChad Smith, co-founder of The Demand Driven Institute who is the global authority on Demand Driven methodology, education, training, certification, and compliance with support from Simio will conduct a detailed workshop to illustrate the impact and value of DDMRP to optimize flow in manufacturing-based supply chains.
Recognizing that a simulation-based Adaptive Process Digital Twin is an ideal technology to take the time-tested DDMRP methodology to the next level, Simio and the Demand Driven Institute began a collaboration that has led to groundbreaking advancements in managing material flow.
This workshop will go through various examples starting from a very basic level all the way to more advanced applications to explain how bringing DDMRP together with a Simio Adaptive Process Digital Twin has resulted in a new and exciting development in the supply chain industry – and an ideal solution for the design, test, optimization, and execution of a Demand Driven Material Requirements Planning process. VendorVendor Workshop: The AnyLogic Company AnyLogic 9 Technology Preview - a Free Online Model Development Environment Andrei Borshchev, Nikolay Churkov, and Ruslan Ibragimov (The AnyLogic Company) Abstract AbstractIn this workshop, we will start with an overview of the AnyLogic ecosystem for simulation – a software suite that supports the full cycle from model development to deployment to operational use of simulations. We will then introduce AnyLogic 9 Technology Preview: a free version of our cloud-based development environment for simulation models. You will get a good dose of AnyLogic 9 user experience: we will build a model, deploy it in AnyLogic Cloud, perform experiments and show you how to work with the new AnyLogic simulation results management tools – all that in the browser, nothing needs to be installed. We will also showcase how this new technology radically simplifies distributed model development and enables instant model delivery to end users. Join us to stay current with the most advanced, innovative, and widely used simulation technology in the world! VendorVendor Workshop: Simio LLC, Session 2 Simio Modeling for New and Prospective Users Jeff Smith (Simio LLC) Abstract AbstractThis workshop is aimed at people who know simulation as a methodology but are new to or have never used Simio. Our focus is on introducing the Simio platform by building a simple model from scratch and then using several pre-built models to demonstrate the major features of the platform.
Specific topics include using the Standard Library objects, building custom objects, experimentation using Simio’s ranking and selection and optimization tools, using structured data for data-driven/data-generated modeling, and using Simio’s simulation-based scheduling tools for developing scheduling solutions. We will also discuss the licensing options and learning resources available for academic and commercial users. VendorVendor Workshop: InControl - The Simulation Company Creating Digital Twins for Cencora Distribution Centers Fred Jansma (InControl - The Simulation Company) Abstract AbstractLast year InControl introduced ERS, a new simulation development platform capable of parallel scalable simulation. We will demonstrate the use of this platform in a newly developed simulation application that allows Cencora to perform simulations by simulation and non-simulation experts within the company. This enables Cencora to locally adapt to day-to-day operational changes in their distribution centers around North America. VendorSimulation in Operations Chair: Gerrit Zaayman (Simio LLC)
Unlocking Enhanced Planning and AI Solutions with AutoSched Simulation Samantha Duchscherer and Madhu Mamillapalli (Applied Materials) Abstract: The SmartFactory Simulation AutoSched product has earned a strong reputation as a highly accurate and efficient discrete event simulator in the semiconductor industry for more than three decades. Applied Materials is dedicated to evolving this core technology, starting with the implementation of a tactical planning solution called Production Control. By leveraging AutoSched simulation, Production Control is able to make accurate predictions about lot completion times and validate factory planning output, leading to detailed capacity planning, faster order updates, improved machine utilization, and streamlined WIP management. Additionally, AutoSched plays a crucial role in driving AI initiatives. The integration of AutoSched simulation models with our SmartFactory AI Platform enables customers to develop the training data necessary for building advanced AI models. Through customer use case stories and illustrated examples, discover how the integration of AutoSched into planning and AI solutions has the potential to revolutionize manufacturing operations and drive efficiency. Implementing DDMRP in a Bottling Plant Using Simulation Chad Smith (Demand Driven Institute) and Gerrit Zaayman and Christine Watson (Simio LLC) Abstract: This presentation will explain how bringing DDMRP together with a Simio Adaptive Process Digital Twin has resulted in a new and exciting development in the supply chain industry – and an ideal solution for the design, test, optimization, and execution of a Demand Driven Material Requirements Planning process. The presentation will cover the implementation of DDMRP across all relevant ranges, including the Demand Driven Operating Model (operational range), Demand Driven S&OP (tactical range), and Adaptive S&OP (strategic range), for a bottling supply chain.
VendorDigital Twins Chair: Alan (Yu) Zhang (Moffatt & Nichol)
Integration of Production and Material Handling Simulations on a Unified Digital Twins Platform Keyhoon Ko (VMS Global, Inc.); Sungtae Lee (VMS Solutions, Co., Ltd.); Donguk Kim and Sangchul Park (Ajou University); and Byung-Hee Kim (VMS Solutions, Co., Ltd.) Abstract: Traditionally, production and material handling simulations in semiconductor manufacturing have been managed separately, leading to inefficiencies. Material handling simulations typically focus on finding the fastest routes between points A and B but often fail to account for potential delays at point B before the next production step begins, highlighting the limitations of standalone approaches. This session presents a case study demonstrating how integrating these two critical systems on a unified digital twin platform has led to significant manufacturing optimization. Production simulations generate equipment schedules without considering logistics, while material handling simulations model equipment locations and identify high-traffic areas. By validating production schedules with material handling simulations, potential issues are identified, allowing the production simulation to incorporate constraints and re-optimize schedules. Attendees will learn how this integrated approach has streamlined processes, improved coordination, and enhanced overall efficiency in semiconductor manufacturing. FlexTerm in Action: Transforming Port and Terminal Operations Worldwide Alan (Yu) Zhang and Juliana Cordeiro (Moffatt & Nichol) Abstract: FlexTerm, developed by Moffatt & Nichol, is a versatile digital-twin platform designed to simulate, emulate, and optimize port and terminal operations. Users can create detailed virtual models of container terminals, run simulations for both short-term and long-term scenarios in minutes, and integrate seamlessly with Terminal Operating Systems (TOS) for better decision-making, whether for conventional or automated terminals.
FlexTerm has been deployed globally to enhance operational efficiency, reduce costs, and streamline terminal layouts and processes. It supports modeling across a wide range of logistics scenarios, including container handling, rail facilities, RoRo operations, bulk materials, offshore wind ports, and cruise terminals.
This presentation will highlight FlexTerm's core capabilities, real-world applications, and key project experiences from around the world. More information on FlexTerm can be found at www.flexterm.com. VendorSimulation Deployment Strategies Chair: Fred Jansma (InControl Enterprise Dynamics)
AnyLogic – Software Ecosystem for Simulation Andrei Borshchev, Nikolay Churkov, and Ruslan Ibragimov (The AnyLogic Company) Abstract: AnyLogic is the leading simulation modeling software ecosystem for business applications, utilized worldwide by over 40% of Fortune top 100 companies. It includes a model development environment (available in desktop and cloud versions), as well as a cloud- or server-based execution environment for digital twins/models used operationally. We will give an overview of the modeling capabilities of AnyLogic (featuring discrete event, system dynamics, and agent-based methods, and Material Handling, Process Modeling, Fluid, Pedestrian, Road Traffic, and Rail libraries) and show options for model deployment, connectivity, and workflow integration via AnyLogic Cloud. This year we are opening the Technology Preview of AnyLogic 9, our online model editor, for free public access, and you will be able to watch the full development-to-deployment cycle done from the browser – and, of course, try it yourself. Creating Digital Twins for Cencora DCs Fred Jansma (InControl Enterprise Dynamics) and Drew Grisham (Cencora) Abstract: Last year, InControl introduced ERS, a new simulation development platform capable of parallel, scalable simulation. We will demonstrate the use of this platform in a newly developed simulation application that allows simulations to be performed by both simulation and non-simulation experts within Cencora. This enables Cencora to locally adapt to day-to-day operational changes in their distribution centers around North America. VendorVendor Simulation Techniques Chair: Amy Greer (MOSIMTEC, LLC)
Crack the Code: Expert Strategies for Verification and Debugging in Simio Madison Evans and Jeffrey Smith (Simio LLC) Abstract: In the age of Digital Twins and Artificial Intelligence, data is everywhere, and the ability to harness it effectively is critical for success. Simio’s platform leverages this data revolution through data-generated and data-driven models governed by Data Tables. However, the true value of these models is unlocked only through the proper use of Data Tables, making them essential to building robust, informative simulations. As with any modeling process, debugging and verification are critical parts of the development process. This presentation explores expert techniques for debugging and verifying Data Tables within Simio, demonstrating how tools such as Trace, Watch, Breakpoints, and Notify can be leveraged to realize the full potential of data-driven, data-generated models. These expert strategies accelerate the model-building process and significantly enhance simulation quality. Selecting Appropriate Simulation User Interfaces Amy Greer, Yusuke Legard, Nelson Alfaro Rivas, and Tae Ha (MOSIMTEC, LLC) Abstract: A simulation user interface often turns a model from code into a tool that organizations gain value from. This presentation focuses on why companies should invest in user interfaces for simulation models, what the different types of user interfaces are, and how an appropriate user interface option can be selected.
When selecting the appropriate user interface, several dimensions should be considered, including the number of users, user sophistication, data feeds to and from other applications, the frequency of model use, and the significance of the decision being made with the simulation model. This presentation will share examples of real-world projects with a variety of user interfaces and discuss why each was selected for its particular project. VendorEmerging Simulation Tools Chair: Andrew Siprelle (Chiaha.ai)
Discrete Rate Simulation: Fast Answers to Strategic Questions in High-Speed/High-Volume Production Operations Andrew Siprelle and Luke Schwarzentraub (ChiAha) Abstract: Discrete Rate Simulation (DRS) has been a key enabling technology used to address canonical problems in high-speed manufacturing. In this talk, we discuss DRS: what it is, where it fits, and how it can provide extremely accurate and fast answers to questions every plant manager wants to know. We will demonstrate two new tools in the ChiAha toolkit, to be commercially released in early ‘25. Aidos Performance Predictor leverages a fast-executing DRS engine. It helps you build, validate, and run Digital Twins of your high-speed production operations. Wishbone Interrupt Modeler leverages practical AI to help you turn raw production line event data into high-fidelity stops behavior. Come see how ChiAha’s revolutionary toolset can help you navigate from raw data to prediction! VendorChiAha: Fast Answers for High-speed Production Problems Chair: Andrew Siprelle (Chiaha.ai)
ChiAha: Fast Answers for High-speed Production Problems Andrew Siprelle Abstract: Discrete Rate Simulation (DRS) has been a key enabling technology used to address canonical problems in high-speed manufacturing. In this talk, we discuss DRS: what it is, where it fits, and how it can provide extremely accurate and fast answers to questions every plant manager wants to know. We will demonstrate two new tools in the ChiAha toolkit, to be commercially released in early ‘25. Aidos Performance Predictor leverages a fast-executing DRS engine. It helps you build, validate, and run Digital Twins of your high-speed production operations. Wishbone Interrupt Modeler leverages practical AI to help you turn raw production line event data into high-fidelity stops behavior. Come see how ChiAha’s revolutionary toolset can help you navigate from raw data to prediction!
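The core mechanic behind discrete rate simulation can be illustrated with a minimal sketch: buffers accumulate or drain material at piecewise-constant rates, and the simulation jumps directly to the next moment a buffer runs full or empty. This is an illustrative toy only, not ChiAha's engine; all names and numbers are assumptions.

```python
# Minimal discrete-rate step: buffers change at constant net rates; the
# next event is whichever buffer first hits zero or its capacity.

def next_event_time(levels, rates, capacities):
    """Time until any buffer reaches 0 or its capacity (inf if none will)."""
    t = float("inf")
    for level, rate, cap in zip(levels, rates, capacities):
        if rate > 0:                      # filling: time to reach capacity
            t = min(t, (cap - level) / rate)
        elif rate < 0:                    # draining: time to reach empty
            t = min(t, level / -rate)
    return t

def advance(levels, rates, capacities):
    """Jump all buffers forward to the next rate-change event."""
    dt = next_event_time(levels, rates, capacities)
    return dt, [lv + r * dt for lv, r in zip(levels, rates)]

# Two buffers: one filling at 5 units/min, one draining at 2 units/min.
dt, new_levels = advance([10.0, 6.0], [5.0, -2.0], [30.0, 30.0])
```

Because the engine skips straight between rate-change events rather than moving individual items, runtimes stay small even for high-volume lines, which is the speed advantage the abstract alludes to.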
PosterPoster Track Lightning Presentations Chair: Sara Shashaani (North Carolina State University); Zeyu Zheng (University of California, Berkeley)
Using Functional Properties to Screen Solutions for Multi-Objective Simulation Optimization Jinbo Zhao and David J. Eckman (Texas A&M University) Abstract: We propose a class of screening procedures designed for multi-objective simulation-optimization problems. The procedures screen out solutions that are deemed strictly Pareto-dominated by leveraging known or assumed properties of the functions describing solution performance, such as convexity and Lipschitz continuity, while simulating only a small subset of all solutions. Whether to screen out a given solution is determined by solving a non-convex mixed-integer program, the size of which scales with the number of simulated solutions and the number of objectives. Screening decisions are accompanied by several statistical guarantees, including uniform confidence and consistency. We discuss the computational tractability of the underlying optimization problem and applications in large-scale problems. Agglomerative Clustering of Simulation Output Distributions Using Regularized Wasserstein Distance Mohammadmahdi Ghasemloo and David J. Eckman (Texas A&M University) Abstract: We investigate the use of clustering methods on data produced by a stochastic simulator, with applications in anomaly detection, pre-optimization, and online monitoring. We introduce an agglomerative clustering algorithm that clusters multivariate empirical distributions using the regularized Wasserstein distance and apply the proposed methodology to a call-center model. A Machine Learning-Augmented Optimization For A Maximal Covering Location-Allocation Problem Kuangying Li, Hiruni Niwunhella, Leila Hajibabai, and Ali Hajbabaie (North Carolina State University) Abstract: This study tackles the maximal covering location-allocation problem (MCLP) by introducing a novel machine learning-enhanced column generation (ML-CG) method comprising offline training and online prediction.
The proposed model is applied to a vaccine distribution case study, formulated as a mixed-integer linear model to minimize the total cost of vaccine shipments and maximize allocations, incorporating equity constraints for age, race, and gender groups. Empirical case studies in Pennsylvania, based on real-world data from the Centers for Disease Control and Prevention (CDC) and health department websites, confirm the model's effectiveness. Container-based Simulation Daniel Seufferth, Falk Stefan Pappert, Heiderose Stein, and Oliver Rose (Universität der Bundeswehr München) Abstract: Popular methods like machine learning, simulation-based optimization, and data farming require a simulation environment that supports scalable simulation workloads and provides access to distributed computational resources.
Containerization and container orchestration are promising methods for creating such a simulation execution platform.
Therefore, we provide a first concept for a portable, scalable, container-based simulation environment tailored to the future needs of various simulation and optimization methods. Simulating the Federal Reserve as Dealer of Last Resort Donald Berndt (University of South Florida); David Boogers (FinaMetrics, Inc.); and Saurav Chakraborty (University of Louisville) Abstract: In today's global financial systems, a wide range of regulators focus on ensuring resilient markets. The Federal Reserve is among the most influential financial regulators in the world. The Fed is responsible for formulating monetary policy, in contrast to fiscal policy. The Great Recession and subsequent global pandemic ushered in a new era, with global central banks taking unprecedented actions, extending emergency liquidity to a wide range of financial intermediaries and even directly intervening in markets. This paper focuses on using agent-based modeling (ABM) and simulation to better understand financial crisis dynamics. The overall aim is to highlight the promise of ABM approaches for evaluating regulatory policies, using the US corporate bond market as a case study. In particular, this paper focuses on extending the model with a Fed regulatory agent, guided by the real-world Corporate Bond Market Distress Index (CMDI). Process Digital Twin for Semiconductor Test Equipment Using DES and ML Zach Eyde (Arizona State University, Intel Corporation) and Robert Dodge and Giulia Pedrielli (Arizona State University) Abstract: Implementing a digital twin (DT) in semiconductor manufacturing is a rising area of research. In this work, we address key challenges to a practical DT implementation in the context of testing. Specifically, we consider Intel modules (Burn In, Structural Test, and System-Level Test) and build an infrastructure for their twinning.
In our infrastructure, a DT-agent contains several models, with widely varying accuracy and execution complexity, that can be adopted to provide insights offline as well as at runtime. We introduce the enabler for the Semiconductor Tester DT-agent, which we constructed as a high-fidelity discrete event model to support offline optimization and analytics tasks. Such a model can be used to train an ML model that can, in turn, be used at runtime. The team will investigate methodologies to auto-generate the DES from a base model and auto-repair the DES based on production data. Real-Time Predictive Maintenance: A Digital Twin Approach With LLM-Generated Work Instructions John Williams, Gerald Jones, Xueping Li, and Tom Berg (University of Tennessee, Knoxville) and Luke Birt and Ashley Stowe (Consolidated Nuclear Security, LLC; Y-12 National Security Complex) Abstract: This paper introduces a digital twin system using Unreal Engine 5 (UE5), large language models (LLMs), and the Message Queue Telemetry Transport (MQTT) protocol for real-time data synchronization and visualization. LLMs generate accurate work instructions for maintenance, displayed in the virtual reality (VR) environment. Through a VR headset, managers visualize asset conditions, interact with virtual controls, and observe real-world reactions. The digital twin also serves as a training platform for personnel operating in high-security areas, enabling realistic scenario training before clearance. This system bridges the gap between physical asset monitoring and virtual simulation by leveraging MQTT for efficient data exchange, large language models for work instruction generation, and UE5 for immersive visualization.
Optimizing Shelf Movement In High-density Warehouse Through Simulation Kanau Kumekawa, Shizu Sakakibara, and Takufumi Yoshida (Toshiba Corporation) Abstract: This study examines the effectiveness of adjacently grouping multiple shelves before retrieving them in high-density warehouses, where shelves are positioned next to each other without any aisles in between. Our method models shelf retrieval as a game, aiming to retrieve targeted cells from a grid of empty and filled cells with minimal effort. Through simulations, we find that adjacently grouping shelves and moving them together is more efficient when two shelves are close to each other perpendicular to their goal direction and far from the goal, while the distance between shelves parallel to the goal direction is less significant. Dynamic Simulation and Control Algorithm Development for Automation in Shipyard Production Operations Hyewon Lee and Jong-Ho Nam (Korea Maritime & Ocean University) and Kyungbin Kim (Marine Tech-In Co., Ltd.) Abstract: Block lifting operations in shipyards are traditionally conducted manually using cranes. In this study, a control method for block lifting operations is proposed to enable more accurate and efficient operation. First, the Model Predictive Control (MPC) method was applied to address the underactuated problems prevalent in crane operations in shipyards. Meanwhile, to reduce the computation time of the MPC method, the dynamics model of the floating crane, which is a multibody system, was formulated using the embedding technique. Finally, we formulated an optimization problem that minimizes deviations from the target position and orientation of the block. In particular, physical constraints on the block's position and thresholds on the control inputs were applied in the MPC method. The method was successfully validated in block erection simulations, handling disturbances and constraints effectively.
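The receding-horizon idea behind MPC, as used in the crane abstract above, can be sketched on a toy system: a 1-D double integrator tracks a target position while inputs stay within bounds, and only the first optimized input is applied before re-planning. This is a generic illustration under assumed constants, not the paper's multibody floating-crane formulation.

```python
import numpy as np

# Toy MPC-style tracking for a 1-D double integrator, solved by projected
# gradient descent with box-bounded inputs. All constants are illustrative.

DT, N, U_MAX = 0.1, 10, 2.0   # step size, horizon length, input bound

def rollout(x, v, u_seq):
    """Simulate positions of the double integrator over the horizon."""
    xs = []
    for u in u_seq:
        v = v + DT * u          # velocity update
        x = x + DT * v          # position update
        xs.append(x)
    return np.array(xs)

def mpc_first_input(x0, v0, target, iters=300, lr=0.5):
    """Minimize sum (x_k - target)^2 subject to |u_k| <= U_MAX; return u_0."""
    u = np.zeros(N)
    for _ in range(iters):
        # Finite-difference gradient of the tracking cost.
        base = np.sum((rollout(x0, v0, u) - target) ** 2)
        grad = np.zeros(N)
        for j in range(N):
            du = np.zeros(N)
            du[j] = 1e-4
            grad[j] = (np.sum((rollout(x0, v0, u + du) - target) ** 2) - base) / 1e-4
        # Projected gradient step enforces the input bounds.
        u = np.clip(u - lr * grad, -U_MAX, U_MAX)
    return u[0]   # receding horizon: apply only the first input, then re-plan

u0 = mpc_first_input(x0=0.0, v0=0.0, target=1.0)
```

Handling input bounds inside the optimization, rather than clipping after the fact, is what the abstract means by applying thresholds on the control inputs within the MPC method.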
Simulation-Based Airport Runway Performance Optimization By Modeling Multiple Control Tower Operations: A Case Study Ahmad Attar (University of Exeter) and Mahdi Babaee, Sadigh Raissi, and Majid Nojavan (Islamic Azad University, South Tehran Branch) Abstract: The substantial growth in the overall demand for air transportation makes the capacity of existing airports a key constraint for major cities in developing countries. One of any airport's main building blocks is its control tower, whose decisions influence the airport system's throughput and waiting times, particularly for civil purposes. This study investigates several operations and decisions involved in the landing and takeoff of airplanes. What makes the system more complex is the different time requirements for safely admitting various sizes of aircraft. This research develops a novel digital twin using discrete event simulation to study the outcome of different applicable expansion scenarios in the presence of such complex, realistic decisions. The study is applied to an international civil airport in the Middle East, yielding a substantial overall improvement in the realized runway capacity and a significant reduction in aircraft waiting time. An Optimal Consolidation Policy for Vehicle Routing Plans in Last-Mile Delivery: A Deep Q-Learning Approach Seokgi Lee (Youngstown State University), Hyeong Suk Na (University of Missouri), and Yooneun Lee (University of Dayton) Abstract: As a result of dynamic changes in consumer shopping patterns, such as purchasing more items more frequently from online retailers, large-scale online markets have a high split rate of orders. Consequently, consolidating and delivering split orders from the same customer has been addressed as a major problem in leveraging green logistics in online retailing.
In this research, last-mile logistics situations are analyzed to address two interrelated questions: (i) what would be the optimal order and shipment consolidation policy, considering expected operational and economic efficiency in last-mile delivery, and (ii) what would be the resulting vehicle routing plan that improves operational and service performance. We develop an integrated decision-making framework that combines Deep Q-Network reinforcement learning with soft-update and feedback control to simultaneously optimize vehicle routing plans and consolidation policies. Experimental results show that the optimal consolidation policy improves transportation efficiency compared to other forced or non-forced transportation strategies. Shipyard Block Transport Scheduling Optimization via Simulation and Reinforcement Learning Seung Woo Han and Seung Heon Oh (Seoul National University), Ki Su Kim (HD Korea Shipbuilding & Offshore Engineering), and Jong Hun Woo (Seoul National University) Abstract: Blocks, the fundamental units of ship construction, can only be transported using specialized vehicles known as transporters due to their weight and size. Therefore, optimizing the block transportation schedule at shipyards is of high importance. Previous studies have extensively addressed the optimization of block transportation scheduling based on metaheuristic approaches. This study proposes an effective optimization methodology capable of dynamic scheduling by combining reinforcement learning with graph neural networks. First, we construct a block transportation simulation that can calculate the empty travel time and tardiness of each block, which are the objectives of our research. Second, the Crystal Graph Convolutional Neural Network (CGCNN) and Proximal Policy Optimization (PPO) are combined to calculate the optimized policy for the block transportation schedule.
The effectiveness of the proposed algorithm is demonstrated through simulations by comparing the results with heuristic and metaheuristic algorithms presented in previous research. Active Fluid Gurney Flaps: An Effective Solution to Promote Sustainable Air Transportation Mario Lucas, Jorge Saavedra, and Luis Cadarso (Universidad Rey Juan Carlos) Abstract: Aircraft operational weight reduction is vital to enhance aviation sustainability. Using an Active Fluid Gurney Flap (AFGF) holds promise for significant weight reduction and enhanced, controlled lift augmentation. This innovative technology removes the need for actuators, deployment mechanisms, and high-lift surfaces such as flaps. Moreover, its working principle grants precise control over lift adjustments during operation. The AFGF mechanism involves the injection of a jet flow at constant pressure through an orifice positioned at the trailing edge of the airfoil pressure side. Its efficacy is evaluated through 2D Computational Fluid Dynamics (CFD) simulations, exploring the impacts of Reynolds numbers and air jet injection pressures. Impact of Operating System Updates on Cybercriminal Access Duration: A Simulation-Based Study Jeongkeun Shin (Carnegie Mellon University), Tanav Changal (Troy High School), and Richard Carley and Kathleen Carley (Carnegie Mellon University) Abstract: For the best cybersecurity practices, it is essential to employ up-to-date operating systems. Organizations using outdated systems, particularly those no longer supported by manufacturers, are exposed to a broader spectrum of vulnerabilities. These vulnerabilities offer cybercriminals various opportunities to exploit during their attacks, increasing the likelihood of achieving their objectives. This paper leverages agent-based modeling to assess the risks associated with using outdated operating systems in organizations.
We simulate a spearphishing campaign targeting virtual organizations that utilize either up-to-date or outdated operating systems. Our simulations incorporate a comprehensive cyber warfare scenario that includes MITRE ATT&CK-based cyber attack and defense strategies, alongside a detailed organizational model. We find that when defense strategies fail to address vulnerabilities unique to outdated systems, cybercriminals can maintain persistent access to compromised devices. This results in significantly longer access times for attackers compared to organizations with modern operating systems. Simulating Cyberattacks and Defense Mechanisms Against Emergency Vehicle Preemption Systems Dalal Alharthi (University of Arizona) and Montasir Abbas (Virginia Tech) Abstract: There are more than 320,000 traffic signals in the US. A significant number of these signals are equipped with emergency vehicle preemption (EVP) systems, where each emergency vehicle (EV) interrupts pre-designed signal operation plans. Recently, the difficulty of configuring EVP operations has been exacerbated by the potential for cybersecurity attacks that can spoof EVP calls or prevent actual calls from reaching the traffic controllers. It is, therefore, critically important to develop robust and efficient EVP systems that can detect cybersecurity attacks and operate the EVP system safely and optimally. This work uses a digital twin of a transportation network to simulate EVP operation and identify system vulnerabilities and remediation techniques using the AnyLogic agent-based simulation platform. We simulate cyberattacks for both normal traffic and connected automated vehicle (CAV) scenarios and propose a novel application of Zero-Trust Architecture (ZTA) to enhance the security of the EVP system.
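The dwell-time effect reported in the operating-system abstract above can be sketched with a tiny agent-based model: a compromised device is evicted only when the defender detects the intrusion, and detection is far less likely on an unsupported system. The detection probabilities here are purely illustrative assumptions, not values from the paper.

```python
import random

# Toy model: attacker dwell time on one device grows when defenses do not
# cover the vulnerabilities unique to outdated operating systems.
# Both detection probabilities are assumed for illustration.

def dwell_time(outdated, rng, p_detect_updated=0.30, p_detect_outdated=0.05):
    """Days until the defender evicts the attacker from one device."""
    p = p_detect_outdated if outdated else p_detect_updated
    days = 1
    while rng.random() > p:   # geometric waiting time until detection
        days += 1
    return days

def mean_dwell(outdated, n=20_000, seed=1):
    """Average dwell time over n simulated compromised devices."""
    rng = random.Random(seed)
    return sum(dwell_time(outdated, rng) for _ in range(n)) / n

dwell_outdated = mean_dwell(outdated=True)
dwell_updated = mean_dwell(outdated=False)
```

Under these assumptions, mean dwell time follows a geometric distribution with mean 1/p, so halving detection probability on outdated systems multiplies attacker access time accordingly, which is the qualitative finding the abstract reports.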
Dynamic Spare Parts Inventory Management Utilizing Machine Health Data Avital Kaufman, Jennifer Kruman, and Yale Herer (Technion – Israel Institute of Technology) Abstract: We present a novel approach for utilizing machine health data to improve spare parts supply chain performance in a multiple-workstation, redundancy-based production environment. This research was carried out in collaboration with Augury, an Israeli artificial intelligence company specializing in gathering and analyzing machine health data to boost productivity and profitability. We formulate and solve a new inventory problem that arises when the health state of machines is known. We model the system as a continuous-time Markov chain. Since finding exact solutions for all but the smallest instances is computationally prohibitive, we address the problem using simulation. We generated and tested a number of Markovian dynamic rules. Moreover, since we are using simulation, we exploited its flexibility to define and test some non-Markovian rules as well. Both types of rules outperformed the standard optimal base-stock policy, with savings in excess of eighteen percent observed. Covasim-G: A Tool for Projecting COVID-19 Health Burden by Demographic and Geographic Groups Alisa Hamilton (Johns Hopkins Applied Physics Laboratory, Johns Hopkins University); Geoffrey Clapp (Johns Hopkins Applied Physics Laboratory); Colton Ragland (University of Texas, Austin); Alexander Tulchinsky (One Health Trust); Eili Klein (One Health Trust, Johns Hopkins University Dept. of Emergency Medicine); and Gary Lin (Johns Hopkins Applied Physics Laboratory) Abstract: Estimating infectious disease burden across social stratifiers with defined spatial scale is critical for designing policy responses to reduce overall burden and health disparities.
In this study, we developed a tool (Covasim-G) for using the open-source agent-based model, Covasim, with a geographically realistic synthetic population. We simulated COVID-19 deaths in Maryland during the first wave of the pandemic and compared the results to real-world data. Covasim-G was able to simulate COVID-19 deaths per 100,000 by age, gender, race, and county within the same order of magnitude as observed data for Maryland. Covasim-G can be used for any US location in the Census, offering a useful tool for retrospective analyses and scenario modeling. Developing a Simulation Model for Energy Consumption in Mushroom Substrate Composting Ezra Wari, Sandesh Risal, Weihang Zhu, and Venkatesh Balan (University of Houston) Abstract: Energy-efficient production and processing of agricultural products have become a top priority to reduce fossil fuel use, which contributes to global warming and climate change. Computer simulations can determine energy consumption for stochastic processes. This manuscript presents the development of a simulation model for a large-scale mushroom farm to audit energy expenditure and improve process efficiency in the substrate processing facility. Mushroom substrate composting involves steps such as bale transportation, crushing, watering, nutrient supply, turning, and compost transport. The model analyzes the energy requirements of each stage using industry equipment and vehicle specifications collected from the literature. The simulation integrates these parameters to measure stochastic energy consumption during composting. We estimated the energy requirement for producing mushroom substrate compost and compared it with available data. This information aids lifecycle and techno-economic analysis and suggests improvements to reduce energy and resource requirements, enhancing process efficiency.
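The kind of stochastic energy audit described in the composting abstract can be sketched with a small Monte Carlo model: each process stage has a power rating and a random duration, and batch energy is estimated by repeated sampling. The stage names, power ratings, and duration ranges below are illustrative assumptions, not the paper's data.

```python
import random

# Monte Carlo estimate of energy use (kWh) for a multi-stage stochastic
# process. All equipment parameters here are hypothetical placeholders.

STAGES = {
    "crushing":  (55.0, (1.5, 2.5)),   # (power in kW, uniform duration range in h)
    "watering":  (7.5,  (0.5, 1.0)),
    "turning":   (30.0, (1.0, 2.0)),
    "transport": (90.0, (0.8, 1.2)),
}

def simulate_batch(rng):
    """Energy for one batch: sum over stages of power * sampled duration."""
    return sum(power * rng.uniform(*duration) for power, duration in STAGES.values())

def mean_energy(n=10_000, seed=42):
    """Average batch energy over n simulated batches."""
    rng = random.Random(seed)
    return sum(simulate_batch(rng) for _ in range(n)) / n

est = mean_energy()
```

Replacing the uniform draws with fitted distributions from equipment logs turns this sketch into the kind of audit model the abstract describes, with per-stage means pointing to where efficiency improvements pay off most.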
Simulation-based Analysis of the New Dynamic Electricity Prices in Germany and Derived Recommendations for Tax Policy Thomas Wiedemann (University of Applied Science Dresden) Abstract: The dynamic electricity pricing system that has been available in Germany since 2024 is largely negated by the legislation in force at the same time, as constant charges of around €0.20 are almost always incurred. On the basis of hourly dynamic electricity prices since the beginning of 2023, various simulation scenarios for market price and charge distribution alternatives were modeled and calculated; these could create stronger incentives for end customers to expand storage. Of course, real implementation would require an amendment to the laws and tax codes for energy suppliers. Whether this is possible in the current German political environment is being discussed with energy market experts. Isochrone Maps for Military Ground Movement Alexander Roman and Oliver Rose (University of the Bundeswehr Munich) Abstract: A primary objective in contemporary military operations is to streamline planning processes, enabling quicker and more informed decision-making while alleviating burdens on both military and civilian personnel.
Our work aims to facilitate this planning, particularly for resupply operations, asset coordination, and predicting enemy troop movements.
We use isochrones on a geospatial road network to represent ground unit movement and visualize possible interactions between assets in a given timeframe.
The shape of the isochrones, and whether they overlap, provides information the planner can use for decision-making.
We improve upon existing approaches by implementing a way to represent the off-road travel possibilities of ground assets.
The resulting software can be used in military training simulations.
Furthermore, we plan to utilize isochrones as action-space representations for battlefield simulations. Study of a Naval Warfare Wargame Simulation-based Naval Ship Effectiveness Analysis Geon Woong Byeon, Seung Heon Oh, and Jong Hun Woo (Seoul National University) Abstract: The quantitative analysis of a naval ship's performance and engagement effectiveness is crucial for effective ship design and strategic operations. However, limitations exist in the testing and analysis of real engagement systems. Therefore, Modeling and Simulation (M&S) or mathematical and probabilistic models are employed to replicate the engagement environment. Previous research on naval engagement wargame simulation has focused on simulation frameworks rather than practical applications. In addition, few studies have modeled the interactions and logic of naval objects in detail. In response, this study proposes a new methodology for quantitatively evaluating the engagement performance of naval ships. The new methodology selects a specific modeling scope and applies explainable logic. In addition, the methodology is applicable to evaluating naval ships considering uncertainties and complex interactions in the engagement. Using the proposed methodology, this study develops a ship engagement performance evaluation framework. Additionally, a case study on improving naval ship performance is presented. Blind Modeling the Peace Game: Reverse Engineering a System Structure for a One-off Complex National Security Wargame Timothy Clancy (University of Maryland; Dialectic Simulations Consulting, LLC) and Michaela Gawrys, Molly Bauer, Raleigh Mann, Alex Jonas, Elizabeth Blake, and Robert Lamb (University of Maryland) Abstract: System structure models of existing national security wargames could enable systemic analysis and facilitate computer simulation. However, few if any wargames have a system structure model of the game.
Many wargames are also run infrequently, perhaps only once, with strict limits on observer participation. Our experiment was to determine if, using a modified systems thinking method, a “blind observer” could reverse engineer the “Peace Game” to identify core system structure from observed behavior modes. Each Peace Game runs as a two-day exercise with participants playing a country team challenged to develop a response to a variety of host nation complications. Participants must consider all U.S. government assets available to mitigate the crises. The exercise also includes participation from senior retired officers on the control team, simulating Washington leadership responses as well as host-nation officials. Our experimental findings include observed behavior modes, a proposed system structure, and a writeup of results. pdfExploring Electric Vehicle Market Dynamics with Agent-Based Modelling: The Impact of Design Strategies and Consumer Characteristics Juan Pablo Bertucci and Maurizio Clemente (Eindhoven University of Technology) Abstract AbstractIn this paper, we present an agent-based simulation framework to assess the effect of the optimal design methodologies for battery electric vehicles employed by original equipment manufacturers on their market share and customer uptake. First, we create demographically accurate customer agents representing their characteristics based on recent real-world data, whereby we implement a choice model enabling each agent to make purchases according to their preferences. Second, we develop company agents who design and manufacture their vehicles according to different optimal design methodologies: vehicle-tailored and concurrent design. Finally, we compare the evolution of the market over multiple years to obtain the final profitability of the different strategies and final consumer vehicle adoption, with respect to base scenarios.
Our results show that leveraging concurrent design increases the attractiveness of electric vehicles for customers, leading to increases in sales compared to the traditional vehicle-tailored approach. pdfUsing Unvalidated Prototype Models in the Problem Definition Phase for Agent-Based Social Simulation: Practical Insights Kotaro Ohori (Toyo University); Yusuke Goto, Shoya Sasaki, and Eito Oda (Shibaura Institute of Technology); and Shingo Takahashi (Waseda University) Abstract AbstractThis study investigates the use of a prototype model, which has not been validated, to define the policies and Key Performance Indicators (KPIs) that an Agent-Based Social Simulation (ABSS) should evaluate during the problem definition phase, before the model construction. An experiment was conducted using the prototype model in discussions with officials in Kamo City, Japan, on health promotion policies. Similar to the effects observed in a previous study with validated ABSS models, the prototype facilitated new perspectives and made it easier for stakeholders to express their opinions. However, it showed limitations in bringing to light concerns that had not been previously shared. Overall, while the prototype was not perfect, it proved useful in stimulating dialogue and eliciting ideas for policies and KPI considerations. pdf
Track Coordinator - Ph.D. Colloquium: Siyang Gao (City University of Hong Kong), Alison Harper (University of Exeter, The Business School), Cristina Ruiz-Martín (Carleton University), Eunhye Song (Georgia Institute of Technology) PhD ColloquiumPhD Colloquium Keynote: Simulations Unleashed: Empowering Decisions in the Age of AI Chair: Cristina Ruiz-Martín (Carleton University)
Simulations Unleashed: Empowering Decisions in the Age of AI Claudia Szabo (University of Adelaide) Abstract AbstractSimulations are game-changers for decision makers, and even more so in scenarios where the impact of decisions is critical and quick decisions need to be made in rapidly changing environments. AI technologies have transformed the way we think about data, predictions, and decisions, yet it is with and within simulations that they both become truly transformative. This keynote will explore the key role that simulations play in harnessing the power of AI, allowing us to model complex systems, predict outcomes, and test scenarios with high accuracy. We will look at how simulations can be used to model complex systems and how AI technologies can be used to model and predict decisions within these systems. We’ll discuss study design, stakeholder engagement and potential traps, as well as some recipes for success. pdf PhD ColloquiumPhD Colloquium Session 1 Chair: Cristina Ruiz-Martín (Carleton University)
A Digital Twin Approach to Support the Evolution of Cyber-Physical Systems Joost Mertens (University of Antwerp) Abstract AbstractA digital twin is a virtual representation of a real-world system from which data is continually collected. The data is fed back to the digital twin such that it may mirror the system it reflects. In exchange, its users gain a variety of services. A key aspect of all digital twins is evolution, in two different ways. The first way deals with the mirroring of the real-world system when it evolves. The models in the digital twin should reflect that evolution. The second way deals with the evolution of the services in the twin itself. Like any other software system, the purpose and requirements of the digital twin evolve over time. In this thesis, the focus is on a subset of issues encountered in these two types of digital twin evolution. It provides techniques that aid digital twin developers with the evolution of their digital twin. pdfA Digital Twin-based Simulator for Small Modular and Microreactors Zavier Ndum Ndum (Texas A&M University) Abstract AbstractThis paper presents the development and implementation of a mechanistic/physics–based Digital Twin (DT) simulator for a conceptual 4.5 MWth Lead-cooled Fast Reactor (LFR). The simulator leverages the MQTT protocol and MATLAB’s App Designer to enable real-time visualization and interaction with the reactor’s operational parameters. The system’s Graphical User Interface (GUI) mimics a reactor control room, facilitating risk-free experimentation, design visualization, testing, and optimization. The integration of virtual sensors and the ThingSpeak™ platform allowed for the seamless transition to real-time data streaming and analysis, enhancing the simulator's utility for training and visualization/demonstration of operational transients.
pdfBayesian Optimization for Clinical Pathway Decomposition from Aggregate Data William Plumb (Imperial College London) Abstract AbstractData protection rules often impose anonymization requirements on datasets by means of aggregations that hinder the exact simulation of individual subjects. For example, clinical pathways that disclose medical conditions of patients may typically need to be aggregated to preserve anonymity of the subjects. However, aggregation unavoidably biases the simulation process, for example, by introducing spurious pathways that can skew the simulated trajectories. In this paper, we study this problem and develop approximate decomposition methods that mitigate its impact. Our method is shown to produce, from the raw aggregates, pathways with higher fidelity than sampling a Markov chain model of the aggregate data, while preserving the length of the original pathways. We observe a relative increase in average cosine similarity of up to 52% with respect to the true pathways compared with aggregate Markov chain sampling. pdfResilient Infrastructure Network Scheduling: A Hybrid Simulation and Reinforcement Learning Approach Pavithra Sripathanallur Murali (George Mason University) Abstract AbstractCritical infrastructure systems exhibit complex interdependencies across multiple levels, with their collective resilience affected by various factors. Traditional centralized resource allocation models often fall short in addressing the decentralized nature of these network-of-networks structures, where individual entities manage distinct components. This study presents a hybrid simulation approach that integrates top-down and bottom-up modeling to capture the dynamic and stochastic decision-making processes in infrastructure management.
Integrating system dynamics for organizational-level budgetary decisions with agent-based modeling of maintenance activities and network evolution, this approach provides a holistic framework for analyzing interdependent infrastructures. Additionally, it leverages deep reinforcement learning to determine optimal restoration strategies for each network, accounting for financial constraints. Applied to the water distribution and mobility networks in Tampa, FL, this methodology effectively enhances the resilience of decentralized, interdependent infrastructure systems. pdfSimulating Federated Learning for Culvert Management in Utah Pouria Mohammadi (University of Utah) Abstract AbstractTransportation agencies increasingly adopt machine learning (ML) to enhance infrastructure management strategies. However, traditional centralized ML approaches face significant challenges due to data scarcity, which limits their effectiveness. To address this issue, we proposed using Federated Learning (FL), a novel approach that enables multiple agencies to collaboratively train ML models on their local datasets without sharing raw data, thus preserving data privacy. In our study, we developed and compared centralized and FL-based Artificial Neural Network (ANN) models using culvert datasets from six states. The FL models, especially when augmented with synthetic data, showed considerable improvements in predictive accuracy (30%), nearly matching the performance of centralized models. Our findings suggest that FL can effectively overcome data limitations, offering transportation agencies a more collaborative and data-driven approach to infrastructure management. This approach particularly benefits agencies with smaller datasets, enhancing their predictive capabilities and maintenance strategies.
pdfGeneralizing the Generalized Likelihood Ratio Method through a Push-Out Leibniz Integration Approach Xingyu Ren (University of Maryland) Abstract AbstractWe extend the generalized likelihood ratio (GLR) method using a novel push-out Leibniz integration approach. Extending the conventional push-out likelihood ratio (LR) method, our approach allows parameter-dependent sample spaces after the change of variables, introducing a surface integral in addition to the standard LR estimator, which may require extra simulation. Our approach also applies to cases with only "local" change of variables, includes existing GLR estimators as special cases, and covers a broader range of discontinuous sample performances. pdfModeling Intra-hospital Transfers: the Effect of Care-spatial Information on Decision-making and Patient Flow Momoko Nakaoka (University of Cambridge) Abstract AbstractAn Intra-hospital Transfer (IHT) refers to a patient's movement between hospital units, which includes changes in both the patient's location and the team responsible for their care. Therefore, an IHT workflow involves interactions between clinical and non-clinical staff. Hospitals with bed shortages often experience a sequence of IHTs to create an available bed in the right place at the right time for a boarding ED patient. Group decisions on priority setting and resource allocation for IHTs can unintentionally disrupt patient flow without the necessary information or insight into the potential impact. An observational study and review revealed a lack of care-spatial information in the current systems used for IHT decisions. We proposed a modelling method to demonstrate the process of information exchange between teams using discrete-event simulation (DES). This showed that care-spatial information can support IHT decisions and suggested the potential of DES to quantify the impact of acquiring care-spatial information on patient flow.
pdfConceptual Modeling for Simulation Model Reuse Xiaoting Song (University of Edinburgh Business School) Abstract AbstractThis doctoral research focuses on the development of a methodology for guiding simulation model reuse from the conceptual modeling stage. In simulation studies, neither the benefits of model reuse nor the important role played by conceptual modeling are new. However, existing approaches have so far focused mostly on the reuse of code or, where they do integrate the reuse of conceptual models, lack strong enough empirical evidence of the extent to which they work. This PhD dissertation addresses both these gaps by proposing and validating a five-stage decision-making process template (discussed in a separate paper at this conference) and a step-by-step detailed method (still work in progress) to guide simulation practitioners in evaluating which of the past models they have access to should be reused in a new study, which model components should be reused, and how exactly. pdfA Smoothed Augmented Lagrangian Framework for Convex Optimization with Nonsmooth Stochastic Constraints Peixuan Zhang (Pennsylvania State University) Abstract AbstractMotivated by the need to develop simulation optimization methods for more general problem classes, we consider a convex stochastic optimization problem where both the objective and constraints are convex but possibly complicated by uncertainty and nonsmoothness. We present a smoothed sampling-enabled augmented Lagrangian framework that relies on inexact solutions to the AL subproblem. Under a constant penalty parameter, the dual suboptimality is shown to diminish at a sublinear rate, while primal infeasibility and suboptimality both diminish at a slower sublinear rate.
pdfApplication of Deep Reinforcement Learning based on Graph Representation to Production Scheduling Optimization Young-in Cho (Seoul National University) Abstract AbstractIn this study, we propose a deep reinforcement learning (DRL) based scheduling framework with discrete-event simulation (DES). In the proposed method, a heterogeneous graph is employed for state representation to effectively capture the relational information between entities like machines, buffers, and jobs. As case studies, we apply the proposed method to dual crane scheduling problems and quay-wall allocation problems in the shipbuilding industry. pdfSystemic Population Responsibility and Territorial Digital Twin: Application to COPD Ana Lucia Tula (Ecole des Mines de Saint-Etienne) Abstract AbstractThis ongoing study aims to develop a territorial digital twin for Chronic Obstructive Pulmonary Disease (COPD) to enhance health strategies focusing on patient care and resource optimization. Initially, we used process mining to model the COPD clinical pathway, revealing common care patterns and disease stages. The model shows strong indicators of fitness and precision, with accuracy metrics over 60%. Currently, we are validating the model by comparing simulated costs with real-world data, and using Monte Carlo simulations and Markov chains to assess various health strategies and measure their impact on the clinical pathway. Preliminary results suggest that the model could be effective for simulations and testing new health measures. Future work will focus on integrating real-time data to optimize resource allocation and healthcare strategies and, ideally, to automate the process for application to other diseases, while adhering to Population Responsibility principles.
pdfA Roadmap towards a Digital Twin for Automated Storage and Retrieval Systems Andrea Ferrari (Politecnico di Torino) Abstract AbstractAutomated warehouses have been playing a central role in contemporary supply chain operations, offering significant advantages over traditional systems. This research project aims to develop a digital twin (DT) for automated storage and retrieval systems. The study explores the value of DTs, proposing an architectural framework and investigating the main development steps. A discrete event simulation model was developed and integrated with a dynamic-programming optimization algorithm to generate the sequence of storage and retrieval missions for the handling machines. Validation against a small-scale warehouse setup demonstrated the model's reliability in predicting operational times. A gated recurrent unit-based metamodel coupled with a particle swarm optimization algorithm was developed to solve the order picking sequencing problem. The study contributes to the theoretical understanding of DTs by offering a structured approach to their development and integration. In practice, the DT serves as a powerful decision-support tool by providing real-time insights and predictive analytics. pdfRanking and Contextual Selection for Data-Driven Decision Making Gregory Keslin (Northwestern University) Abstract AbstractThis extended abstract is an overview of ranking and contextual selection (R&CS), a new procedure for ranking and selection with covariates. R&CS runs individual ranking-and-selection experiments at each covariate in an experiment design sampled from the covariate distribution. The systems selected by the ranking-and-selection procedures and the design itself form a classifier for selecting the best system at any future covariate value. Associated with the classifier is an assessment of its accuracy that is proven to satisfy a finite-sample coverage guarantee.
pdfAnalyzing Transport Policies Effects in Developing Countries Kathleen Salazar Serna (Universidad Nacional de Colombia, Pontificia Universidad Javeriana) Abstract AbstractThis dissertation explores urban travel mode choices in developing countries, using an agent-based model simulation, with a case study in Cali, Colombia. This study investigates policy implications on individual decisions and the overall transportation system, incorporating motorcycles as a crucial mode of transport in developing regions. A survey was designed and conducted to gather sociocultural information used to represent agents and their travel behavior in the model. Survey data analysis reveals distinct preferences: low/middle-income groups prioritize cost and time, favoring motorcycles, while high-income individuals prioritize travel time, comfort and security, opting for cars. Preliminary policy simulations indicate positive impacts of interventions such as free public transportation, increased public transport capacity, and enhanced user security. This research informs the crafting of effective and sustainable urban transportation policies in the Global South, emphasizing the importance of tailored strategies for diverse commuter groups. pdf PhD ColloquiumPhD Colloquium Session 2 Chair: Cristina Ruiz-Martín (Carleton University)
Autonomous Pop-Up Attack Maneuver Using Imitation Learning Joao Dantas (Aeronautics Institute of Technology) Abstract AbstractThis study presents a methodology for developing models that replicate the complex pop-up attack maneuver in air combat, using flight data from a Brazilian Air Force pilot in a 6-degree-of-freedom flight simulator. By applying imitation learning techniques and employing a Long Short-Term Memory (LSTM) network, the research trained models to predict aircraft control inputs through sequences of state-action pairs. The model's performance, evaluated using Root Mean Squared Error (RMSE) and the coefficient of determination (R²), demonstrated its effectiveness in accurately replicating the maneuver. These findings highlight the potential of deploying such models in fully autonomous aircraft, enhancing autonomous combat systems' reliability and operational capabilities in real-world scenarios. pdfFeudal MADRL Frameworks: Synchronizing Independent PPO Agents for Scalable Multi-agent Coordination Austin Starken (University of Central Florida) Abstract AbstractMulti-agent deep reinforcement learning (MADRL) enables multiple agents to learn optimal strategies in intricate, ever-changing environments, utilizing deep reinforcement learning methodologies for cooperative and competitive scenarios. However, MADRL encounters challenges, including non-stationarity, shadowed equilibria, credit assignment issues, and communication overhead. This research draws on the hierarchical strategies of feudal reinforcement learning to integrate multiple independent Proximal Policy Optimization agents within a feudal framework to improve MADRL performance. Follower agents focus on learning specific tasks, while a leader agent synchronizes these tasks to achieve a unified goal. Initial findings demonstrate that this feudal approach improves training efficiency and scalability over traditional methods. 
These results indicate that feudal MADRL frameworks are well-suited for managing complex coordination tasks and hold promise for advancing research in more sophisticated environments and multi-team systems. pdfTradeoff-Aware Bayesian Active Learning with Varying Cost for Feasible Region Identification Ioana Nikova (Siemens Industry Software, Ghent University) Abstract AbstractUnderstanding design requirements in engineering design requires identifying regions in the design space that satisfy the constraints. This is called feasible region identification. As running (random) simulations is expensive, a cost- and data-efficient sampling approach is needed to find the feasible designs. Bayesian Active Learning (AL) is an iterative sampling method that uses a surrogate, e.g. a Gaussian process, and an acquisition function to select the next sample. This research focuses on creating new acquisition functions. On the one hand, cost-aware variants are investigated. These acquisition functions incorporate an unknown simulation cost and sample more designs using the same budget while also finding more feasible designs. On the other hand, we look at the exploration-exploitation trade-off of the acquisition function as a two-objective problem, which leads to the creation of two acquisition functions based on scalarization methods. The scalarization-based acquisition functions often outperform most state-of-the-art acquisition functions. pdfRobust Confidence Bands for Stochastic Processes Using Simulation Jangwon Park (University of Toronto) Abstract AbstractIn many applications of stochastic simulation, outputs are sample paths of stochastic processes. A natural way to validate a simulation model in this setting is to construct a confidence band over the sample paths at a specified level of coverage probability and check whether historical paths from the actual system fall within this band.
We propose a robust optimization approach for constructing confidence bands, which, contrary to existing methods, directly addresses optimization bias within the constraints to prevent overly narrow confidence bands. In our first case study, we show that our approach achieves the desired coverage probabilities with an order-of-magnitude fewer sample paths than the state-of-the-art baseline approach. In our second case study, we illustrate how our approach can validate stochastic simulation models. pdfImpact of Stochastic Time Windows in Planning for Construction Projects Serhii Naumets (University of Alberta) Abstract AbstractIn my thesis, I attempt to rethink the foundation of conventional construction engineering planning. I developed a new cost analysis algorithm as an alternative to the traditional time-cost trade-off approach. The new algorithm was built upon the existing time-cost trade-off knowledge and tailored to the time-window planning method. The main difference from the traditional approach lay in pivoting the focus from global optimum to emphasizing planning of alternative execution methods and their ranking. To accommodate the pivot, I framed a so-called reward function as an indicator of how favorable each alternative execution method is in the context of total project time and cost. It was achieved by simulating all possible scenarios of a project plan and deriving the reward function for each alternative. The resulting output of the simulation gives an insight into which activities have the most impact on the project time and cost and to what extent. pdfBayesian Optimization for Recurring, Simulation-based Decision Making in Production and Logistics Philipp Zmijewski (University of Applied Sciences Osnabrück) Abstract AbstractThe thesis proposes a framework to accelerate Bayesian Optimization (BO) for recurrent simulation-based decision-making in production and logistics. 
BO is recognized for its sample efficiency but is often hindered by computational intensity, making it less suitable for scenarios requiring short-term decisions. The proposed framework TMPBO seeks to integrate transfer learning, meta-learning, and parallel computation to reduce optimization times and enhance convergence rates to make BO practical for operational decisions. The research will focus on leveraging historical optimization data and modern computing capabilities to improve BO's efficiency, making it viable for high-dimensional, multi-objective problems. The framework
will be evaluated through a multi-stage process. The thesis aims to demonstrate that the TMPBO framework can offer significant advantages over classic BO and evolutionary algorithms in optimizing recurrent decision-making tasks. pdfData-driven Agent-based Pedestrian Modeling and Simulation in Indoor Environments Amartaivan Sanjjamts (Osaka University) Abstract AbstractModeling pedestrian behavior in diverse environments presents a complex challenge that requires a nuanced and adaptable approach. Our research aims to develop a comprehensive framework for pedestrian dynamics through Agent-Based Modeling and Simulation, guided by data-driven insights. The framework specifically addresses challenges related to design validation and space utilization in indoor environments, with a focus on data-driven initialization, deep learning-assisted model calibration, and incorporating agent heterogeneity to enhance realism. By integrating deep learning with traditional parameter search methods such as Particle Swarm Optimization, the framework also emphasizes establishing a robust model validation process to ensure confidence in its accuracy and applicability for real-world scenarios. pdfSequential Quadratic Programming for Optimal Transport Zihe Zhou (Purdue University) Abstract AbstractThe Monge optimal transport (OT) problem seeks to optimize the transportation cost between two probability measures. The optimization is over a function space and the transportation cost is defined by a cost functional of the maps. Many recent works focus on addressing the OT problem computationally using finite approximations. In this work, we present the infinite-dimensional OT problem over a Banach space. We provide explicit expressions for the first and second-order variation of the objective functional, and of the function form constraint. 
We propose a Sequential Quadratic Programming (SQP) framework and show that, subject to reasonable regularity assumptions, our framework satisfies Alt’s SQP condition, immediately yielding local convergence. Moreover, we demonstrate that a merit functional effectively serves as a step-size monitor, leading to global convergence towards a critical point. To the best of our knowledge, this is the first attempt at a globally convergent SQP operator recursion over infinite-dimensional spaces. pdfTowards the Integration of a Production Plant Digital Twin Exploiting MES Data and Process Mining Michela Lanzini (University of Brescia) Abstract AbstractThis research focuses on analyzing the functional requirements for implementing a Digital Twin (DT) in a manufacturing environment, with the aim of improving decision-making for scheduling activities through simulation. The project is being developed in collaboration with a company in northern Italy, aiming to create a Discrete Event Simulation (DES) model for predictive process monitoring and to propose rescheduling options in response to potential delays. The primary challenges anticipated include integrating data from diverse sources such as ERP and MES systems, ensuring data quality, and adapting the existing system to support the DT framework. It will be crucial to integrate the DES model with other tools and techniques, such as Process Mining, to achieve data-flow integration and synchronization. pdfGlobal Multi-Objective Simulation Optimization with Low-Dispersion Point Sets Burla Ondes (Purdue University) Abstract AbstractConsider the context of a Multi-Objective Simulation Optimization (MOSO) problem. The goal of solving a MOSO problem is to identify decision points that map to the global Pareto set.
For such problems featuring Lipschitz continuous objectives on a compact feasible set, we determine how one should trade off the number of decision points to sample using a low-dispersion point set against the number of simulation replications per sampled point to ensure the estimated Pareto set converges to the true Pareto set under a novel performance measure called the modified coverage error. This performance measure enables us to quantify upper bounds on the deterministic and stochastic errors as a function of the dispersion of the decision points sampled and the number of simulation replications per point. The upper bounds and tradeoff analysis lead to an efficient method for solving global MOSO problems with probabilistic guarantees. pdfEconomic Sustainability of Dynamic Ride-Sharing: A Simulation-Based Analysis Wang Chen (The University of Hong Kong) Abstract AbstractDynamic ride-sharing enables a vehicle to carry two or more passengers with similar schedules and itineraries on each trip, increasing traffic efficiency and vehicle utilization. Existing studies mainly focus on the environmental sustainability of dynamic ride-sharing, such as quantifying the reduced carbon emissions, while overlooking its economic sustainability. This study aims to analyze the economic sustainability of dynamic ride-sharing based on massive simulations. Specifically, this study first develops an agent-based simulation platform that can simulate large-scale ride-sharing and non-ride-sharing services simultaneously. Then, this study conducts a revealed preference survey to calibrate passengers' price and detour elasticity. Finally, using real-world mobility data from ten cities, this study conducts massive simulations on the developed simulator and compares the average revenue of each vehicle under various scenarios with different discounts and detour ratios. This study finds that even a low discount (e.g., 20%) can decrease the average revenue.
pdfGenerative AI and Simulation-driven Predictive Modeling Framework: Enhancing Resilience and Risk Management under Disruptions Seung Ho Woo (Purdue University) Abstract AbstractMaintaining stability and resilience in complex systems by predicting and mitigating risks is essential in the modern era. This research investigates generative AI-enhanced predictive modeling methodologies to strengthen risk mitigation and supply chain resilience. In particular, generative AI is employed to enhance data augmentation with multimodal fusion, adaptive parameter tuning, disruption environment generation, and dynamic model updating. A novel generative AI-based predictive modeling framework is then developed, which combines diverse types of data with advanced learning algorithms to increase the accuracy of the prediction and simulate the impact of disruptions. The preliminary findings suggest that the proposed generative AI-integrated predictive models show advancement in risk and supply chain resilience management and reveal the potential of models for robust prediction. pdf
Best Contributed Papers
Best Contributed Applied Paper
Selection Committee: Eric Jing Du (University of Florida), Tugce Martagan (Eindhoven University of Technology), Giulia Pedrielli (Arizona State University)
Simheuristics for Strategic Workforce Planning at a Busy Airport Johanna Wiesflecker, Maurizio Tomasella, and Thomas W. Archibald (The University of Edinburgh) Abstract Airport demand frequently exceeds capacity. An airport's capacity bottleneck is often located in its runway system; at other times it lies elsewhere, including the terminal facilities. At one of the busiest UK airports, the subject of this study, the bottleneck is the passenger security hall, where staff are employed directly by the airport operator. This airport plans to restructure the current workforce's overall size and types of contracts. Our work proposes a simheuristic approach that uses current workforce data and demand forecasts for the upcoming season to adjust the contractual configurations systematically. We employ simulation to test the expected costs (versus flexibility needs) of the likely rosters that each contractual configuration allows. Using the simulation results, the algorithm aims to identify the contractual configuration that minimizes costs while ensuring the adaptability required to address unforeseen changes in the flight schedule as each season unfolds. pdf
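The simheuristic pattern the abstract describes, a search over candidate contractual configurations with each candidate evaluated by simulation, can be sketched in miniature. The configurations, cost parameters, and demand model below are hypothetical stand-ins, not the paper's workforce data:

```python
import random

random.seed(0)

# Hypothetical contractual configurations: (full-time, part-time) officers.
CONFIGS = [(20, 0), (15, 10), (10, 20)]
FT_COST, PT_COST = 100.0, 55.0   # daily cost per officer (assumed)
SHORTAGE_PENALTY = 40.0          # penalty per staff-equivalent of unmet demand

def simulate_cost(ft, pt, n_days=200):
    """Monte Carlo estimate of expected daily cost for one configuration."""
    total = 0.0
    for _ in range(n_days):
        demand = random.gauss(25.0, 6.0)   # staff-equivalents needed that day
        capacity = ft + 0.5 * pt           # a part-timer covers half a shift
        shortage = max(0.0, demand - capacity)
        total += ft * FT_COST + pt * PT_COST + shortage * SHORTAGE_PENALTY
    return total / n_days

# Heuristic step (here simply exhaustive over three candidates): pick the
# configuration with the lowest simulated expected cost.
best = min(CONFIGS, key=lambda c: simulate_cost(*c))
```

In a real simheuristic, the exhaustive `min` would be replaced by a metaheuristic that proposes configurations, and the simulation would evaluate full rosters against stochastic flight-schedule scenarios rather than a one-line demand draw.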
Best Contributed Papers
Best Contributed Theoretical Paper
Selection Committee: Canan Gunes Corlu (Boston University), L. Jeff Hong (University of Minnesota), Wei Xie (Northeastern University)
Enhancing Language Model with Both Human and Artificial Intelligence Feedback Data Haoting Zhang, Jinghai He, Jingxu Xu, Jingshen Wang, and Zeyu Zheng (University of California, Berkeley) Abstract The proliferation of language models has marked a significant advancement in technology and industry in recent years. Training these models relies heavily on human feedback, a procedure that faces challenges including intensive resource demands and subjective human preferences. In this work, we incorporate feedback provided by artificial intelligence (AI) models instead of relying entirely on human feedback. We propose a simulation optimization framework to train the language model, in which the objective function for training is approximated using feedback from both human and AI models. We employ the method of control variates to reduce the variance of the approximated objective function. Additionally, we provide a procedure for choosing the sample sizes at which to acquire preferences from both human and AI models. Numerical experiments demonstrate that our proposed procedure enhances the performance of the language model. pdf
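The control-variate step the abstract mentions can be illustrated on a toy problem. Here a cheap "AI score" plays the role of a control variate with known mean for estimating the mean of a correlated "human score"; the distributions and names are assumed for illustration and do not come from the paper:

```python
import random

random.seed(42)

# Toy model (assumed): the human-feedback score of a response equals the
# AI-feedback score plus independent noise, so the two are highly correlated.
AI_MEAN = 0.5  # mean of the AI score, assumed known in advance
n = 10_000
ai = [random.random() for _ in range(n)]            # AI feedback in [0, 1]
human = [a + random.gauss(0.0, 0.1) for a in ai]    # correlated human feedback

def mean(xs):
    return sum(xs) / len(xs)

naive = mean(human)  # plain Monte Carlo estimate of the mean human score

# Estimate the optimal coefficient b* = Cov(human, ai) / Var(ai).
ma, mh = mean(ai), mean(human)
cov = sum((a - ma) * (h - mh) for a, h in zip(ai, human)) / (n - 1)
var = sum((a - ma) ** 2 for a in ai) / (n - 1)
b = cov / var

# Control-variate estimator: correct the naive estimate using the known AI mean.
cv = naive - b * (ma - AI_MEAN)
```

Because the AI score absorbs most of the variability of the human score, subtracting `b * (ma - AI_MEAN)` cancels the sampling noise shared by the two estimates, which is the variance-reduction effect the paper exploits when mixing human and AI preference data.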