A Heuristic-based Rolling Horizon Method for Dynamic and Stochastic Unrelated Parallel Machine Scheduling
Shufang Xie, Tao Zhang, and Oliver Rose (Universität der Bundeswehr München)
Program Track: Manufacturing and Industry 4.0
Program Tags: AnyLogic, Distributed
Abstract: In stochastic manufacturing environments, disruptions such as machine breakdowns, variable processing times, and unexpected delays make static scheduling approaches ineffective. To address this, we propose a heuristic-based rolling horizon scheduling method for unrelated parallel machines. The rolling horizon framework addresses system stochasticity by enabling dynamic adaptation through frequent rescheduling of both existing jobs and those arriving within a rolling lookahead window. This method decomposes the global scheduling problem into smaller, more manageable subproblems. Each subproblem is solved using a heuristic approach based on a suitability score that incorporates key factors such as job properties, machine characteristics, and job-machine interactions. Simulation-based experiments show that the proposed method outperforms traditional dispatching rules in dynamic and stochastic manufacturing environments with a fixed number of jobs, achieving shorter makespans and cycle times, reduced WIP levels, and lower machine utilization.

A Hybrid Simulation-based Approach for Adaptive Production and Demand Management in Competitive Markets
S. M. Atikur Rahman, Md Fashiar Rahman, and Tzu-Liang Bill Tseng (The University of Texas at El Paso)
Program Track: Hybrid Modeling and Simulation
Program Tags: AnyLogic, Complex Systems, System Dynamics
Abstract: Managing production, inventory, and demand forecasting in a competitive market is challenging due to consumer behavior and market dynamics. Inefficient forecasting can lead to inadequate inventory, an interrupted production schedule, and, eventually, less profit. This study presents a simulation-based decision support framework integrating discrete event simulation (DES) and system dynamics (SD). DES models production and inventory management to ensure optimized resource utilization, while SD is employed to incorporate market dynamics. This model jointly determines demand through purchase decisions from potential users and replacement demand from existing adopters. Further refinements prevent sales declines and sustain long-term market stability. This hybrid simulation approach provides insights into demand evolution and inventory optimization, aiding strategic decision-making. Finally, we propose and integrate a dynamic marketing strategy algorithm with the simulation model, which results in around 38% more demand growth than the existing demand curve. The proposed approach was validated through rigorous experimentation and optimization analysis.

A Novel System Dynamics Approach to DC Microgrid Power Flow Analysis
Jose González de Durana (University of the Basque Country) and Luis Rabelo and Marwen Elkamel (University of Central Florida)
Program Track: Modeling Methodology
Program Tags: AnyLogic, Complex Systems, Conceptual Modeling, System Dynamics
Abstract: This paper employs System Dynamics (SD) to model and analyze DC power distribution systems, focusing on methodological development and using microgrids as case studies. The approach follows a bottom-up methodology, starting with the fundamentals of DC systems and building toward more complex configurations. We coin this approach "Power Dynamics," which uses stocks and flows to represent electrical components such as resistors, batteries, and power converters. SD offers a time-based, feedback-driven approach that captures component behaviors and system-wide interactions. This framework provides computational efficiency, adaptability, and visualization, enabling the integration of control logic and qualitative decision-making elements. Three case studies of microgrids powered by renewable energy demonstrate the framework's effectiveness in simulating energy distribution, load balancing, and dynamic power flow. The results highlight SD's potential as a valuable modeling tool for studying modern energy systems, supporting the design of flexible and resilient infrastructures.

AI-based Assembly Line Optimization in Aeronautics: A Surrogate and Genetic Algorithm Approach
Maryam SAADI (Airbus Group, IMT Ales); Vincent Bernier (Airbus Group); and Gregory Zacharewicz and Nicolas Daclin (IMT)
Program Track: Simulation and Artificial Intelligence
Program Tags: AnyLogic, Neural Networks, Python
Abstract: Industrial configuration planning requires testing many setups, which is time-consuming when each scenario must be evaluated through detailed simulation. To accelerate this process, we train a Multi-Layer Perceptron (MLP) to predict key performance indicators (KPIs) quickly, using it as a surrogate model. However, classical regression metrics such as Mean Squared Error (MSE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE) do not reflect prediction quality in all situations. To solve this issue, we introduce a classification-based evaluation strategy. We define acceptable prediction margins based on business constraints, then convert the regression output into discrete classes. We assess model performance using precision and recall. This approach reveals where the model makes critical errors and helps decision-makers at Airbus Helicopters trust the AI's predictions.

An Agent-Based Framework for Sustainable Perishable Food Supply Chains
Maram Shqair (Auburn University); Karam Sweis, Haya Dawkassab, and Safwan Altarazi (German Jordanian University); and Konstantinos Mykoniatis (Auburn University)
Program Track: Logistics, Supply Chain Management, Transportation
Program Tags: AnyLogic, Input Modeling, Supply Chain
Abstract: This study presents an agent-based modeling framework for enhancing the efficiency and sustainability of perishable food supply chains. The framework integrates forward logistics redesign, reverse logistics, and waste valorization into a spatially explicit simulation environment. It is applied to the tomato supply chain in Jordan, restructuring the centralized market configuration into a decentralized closed-loop system with collection points, regional hubs, and biogas units. The model simulates transportation flows, agent interactions, and waste return through retailer backhauls. Simulation results show a 31.1 percent reduction in annual transportation distance and cost, and a 35.9 percent decrease in transportation cost per ton. The proposed approach supports cost-effective logistics and a more equitable distribution of transport burden, particularly by shifting a greater share to retailers. Its modular structure, combined with reliance on synthetic data and scenario flexibility, makes it suitable for evaluating strategies in fragmented, resource-constrained supply chains.
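For orientation only, the suitability-score heuristic described in the rolling-horizon scheduling abstract above could take a form like the following minimal Python sketch. It is not taken from the paper: the score terms, weights, and the greedy assignment over one lookahead window are hypothetical placeholders that merely illustrate how job properties, machine characteristics, and job-machine interactions might be combined.

```python
from dataclasses import dataclass

@dataclass
class Job:
    id: int
    family: str
    due_date: float

@dataclass
class Machine:
    id: int
    proc_time: dict      # family -> processing time on this (unrelated) machine
    queue_len: int = 0

def suitability(job, machine, now, w_time=0.5, w_due=0.3, w_load=0.2):
    """Higher is better; each term is squashed into (0, 1]. Weights are illustrative."""
    proc = machine.proc_time[job.family]           # job-machine interaction
    speed = 1.0 / (1.0 + proc)                     # shorter processing -> higher score
    slack = max(job.due_date - (now + proc), 0.0)  # job property vs. the clock
    urgency = 1.0 / (1.0 + slack)                  # tighter due date -> higher score
    idleness = 1.0 / (1.0 + machine.queue_len)     # machine characteristic
    return w_time * speed + w_due * urgency + w_load * idleness

def assign_window(jobs, machines, now):
    """Greedily solve one rolling-horizon subproblem (one lookahead window)."""
    schedule = {}
    for job in sorted(jobs, key=lambda j: j.due_date):
        best = max(machines, key=lambda m: suitability(job, m, now))
        schedule[job.id] = best.id
        best.queue_len += 1                        # crude feedback to later assignments
    return schedule

machines = [Machine(0, {"A": 3.0, "B": 6.0}), Machine(1, {"A": 5.0, "B": 2.0})]
jobs = [Job(0, "A", due_date=8.0), Job(1, "B", due_date=5.0)]
print(assign_window(jobs, machines, now=0.0))
```

In a rolling-horizon setting this assignment routine would be re-run whenever the window advances or a disruption occurs, which is the dynamic adaptation mechanism the abstract describes.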
Building a Climate Responsive Agent-Based Modeling Simulation for the Walkability of the Tropical Hot and Humid Environment
Daniel Jun Chung Hii and Takamasa Hasama (Kajima Corporation); Majid Sarvi (The University of Melbourne); and Marcel Ignatius, Joie Yan Yee Lim, Yijun Lu, and Nyuk Hien Wong (National University of Singapore)
Program Track: Environment, Sustainability, and Resilience
Program Tags: AnyLogic, Open Source
Abstract: Climate change affects thermal comfort and wellness by restricting the walkability potential of the built environment. This is especially true outdoors, under the harsh solar radiation exposure of the tropical hot and humid climate. Passive shading strategies play the most significant role in walkability potential. Vegetation and man-made structures such as pavements provide shade for comfortable navigation, with the latter being a more sustainable and wellbeing-friendly solution. The walkability potential can be simulated using agent-based modelling (ABM) techniques. As a heat mitigation strategy to improve walkability, the most direct intervention is to improve the connectivity of the shading zone along the shortest path between strategic locations. People tend to walk faster and choose the shortest path when dealing with direct sun exposure, while avoiding it entirely if it gets unbearably hot. The ABM simulation is useful for efficient urban planning of walkability potential on campus.

Combining Optimization and Automatic Simulation Model Generation for Less-Than-Truckload Terminals
Lasse Jurgeleit, Patrick Buhle, Maximilian Mowe, and Uwe Clausen (TU Dortmund University)
Program Track: Logistics, Supply Chain Management, Transportation
Program Tag: AnyLogic
Abstract: Recent advances allow Less-Than-Truckload (LTL) terminals to know the distribution of arriving goods on inbound trucks in advance. Assigning docks to inbound and outbound relations, and trucks to docks, is therefore a critical problem for terminal operators. This paper introduces a framework combining automatic model generation and optimization. The approach aims to allow testing of suggestions from multiple optimization algorithms. The relevance and feasibility of this approach in finding an appropriate optimization algorithm for a given system are demonstrated through a simplified case study of a variation of the dock assignment problem. This paper demonstrates how such a combination can be constructed and how the methods can effectively complement each other, using the example of LTL terminals.

Conceptual Hybrid Modelling Framework Facilitating Scope 3 Carbon Emissions Evaluation for High Value Manufacturing
Okechukwu Okorie, Victoria Omeire, Paul Mativenga, and Maria Sharmina (The University of Manchester) and Peter Hopkinson (The University of Exeter)
Program Track: Hybrid Modeling and Simulation
Program Tags: AnyLogic, Complex Systems, Conceptual Modeling
Abstract: Existing manufacturing research on greenhouse gas emissions often focuses on Scope 1 and Scope 2 emissions and underestimates Scope 3 emissions, which are indirect emissions from a firm's value chains and from city and region consumption. Traditional methodologies for evaluating carbon emissions are limited for Scope 3, due to the complexity of manufacturing supply chains and the lack of quality data, leading to incomplete carbon accounting and potential double-counting. This challenge is pronounced for high value manufacturing, an emergent manufacturing perspective, due to the complexity of its supply chain network. This study develops a comprehensive hybrid modeling framework for evaluating Scope 3 emissions at the product level, useful for manufacturers and modelers.

Development of a Library of Modular Components to Accelerate Material Flow Simulation in the Aviation Industry
Hauke Stolz, Philipp Braun, and Hendrik Rose (Technical University of Hamburg) and Helge Fromm and Sascha Stebner (Airbus Group)
Program Track: Logistics, Supply Chain Management, Transportation
Program Tags: AnyLogic, Java
Abstract: Aircraft manufacturing presents significant challenges for logistics departments due to the complexity of processes and technology, as well as the high variety of parts that must be handled. To support the development and optimization of these complex logistics processes in the aviation industry, simulation is commonly employed. However, existing simulation models are typically tailored to specific use cases. Reusing or adapting these models for other aircraft-specific applications often requires substantial implementation and validation efforts. As a result, there is a need for flexible and easily adaptable simulation models. This work aims to address this challenge by developing a modular library for logistics processes in aircraft manufacturing. The outcome of this work highlights the simplifications introduced by the developed library and its application in a real aviation warehouse.

Evaluating Third-Party Impacts in Urban Air Mobility Community Integration: A Digital Twin Approach
Alexander Ireland and Chun Wang (Concordia University)
Program Track: Simulation as Digital Twin
Program Tag: AnyLogic
Abstract: Urban Air Mobility (UAM) presents unique community impacts. UAM vehicles mostly operate above populated areas, whereas traditional aviation typically operates point-to-point between populated areas, so impacts on third parties are increasingly important. This paper explores the use of a Digital Twin to optimize eVTOL vehicle flight planning, using accurate live population data to minimize third-party safety and privacy impacts. Live population density data is translated into an equivalent agent-based simulation and used to calculate safety and privacy metrics. A Montreal vertiport case study compares a baseline eVTOL approach suggested by the FAA to 128 alternate approach scenarios to find generalizations and prove the usefulness of Digital Twin technology for UAM operational optimization. It was found that flight path characteristics suggested by regulators are not necessarily optimal when considering third-party impacts, and, by extension, that Digital Twins are a promising technology that will play a significant role in making UAM safer.

Identification of Spatial Energy Demand Shift Flexibilities of EV Charging on Regional Level Through Agent-Based Simulation
Paul Benz and Marco Pruckner (University of Würzburg)
Program Track: Logistics, Supply Chain Management, Transportation
Program Tags: AnyLogic, Emergent Behavior
Abstract: Open access to electric vehicle charging session data is limited to a small selection provided by operators of mostly public or workplace chargers. This restriction poses a hurdle for research on regional energy demand shift flexibilities enabled by smart charging, since usage characteristics of different charging options remain hidden. In this paper, we present an agent-based simulation model with parameterizable availability and usage preferences of public and private charging infrastructure to access insights into charging behavior otherwise only visible through proprietary data. Thus, we enable utility operators to estimate the spatial charging energy distribution and support the integration of renewable energy by showing potentials for smart charging. In a first application, we point out how increased access to and use of private charging facilities can lead to additional energy demand in rural municipalities, which, in turn, leads to a lower grid load in urban centers.

Impact of Battery Electric Trucks on Intermodal Freight Transportation - An Agent-based Simulation Study
Eric Reynolds (Motlow State Community College), Nasim Nezamoddini (Oakland University), and Mustafa Can Camur and Xueping Li (University of Tennessee)
Program Track: Agent-based Simulation
Program Tags: AnyLogic, Data Analytics
Abstract: This paper applies an agent-based simulation model to examine the feasibility of battery electric trucks (BETs) in intermodal freight transportation, focusing on the Memphis hub network. Two infrastructure deployment stages, depot charging only and depot plus destination charging, are modeled and simulated using the AnyLogic platform to study truck utilization patterns. Real-world manufacturing sites are chosen, and the trucks are routed along roadways using a Geographic Information System (GIS) map. Battery charge levels and charging infrastructure are modeled under both scenarios. Four electric truck models from various manufacturers, including the Tesla Semi, Nikola Tre, Volvo VNR, and Freightliner eCascadia, are compared in terms of performance and utilization. Results showed that battery electric trucks are a feasible solution for intermodal trucking operations and for transporting goods from manufacturers to destinations. The comparison also highlights the effects of changing shifts and adding opportunity charging at destinations on truck utilization under different battery efficiencies and capacities.

Integrating Decision Field Theory Within System Dynamics Framework For Modeling the Adoption Process of Ride Sourcing Services
Best Contributed Theoretical Paper - Finalist
Seunghan Lee (Mississippi State University) and Jee Eun Kang (University at Buffalo, SUNY)
Program Track: Hybrid Modeling and Simulation
Program Tags: AnyLogic, Complex Systems, System Dynamics
Abstract: The rise of ride-sourcing services has changed the transportation industry, reshaping urban mobility services. This paper presents an integrated framework of the adoption of ride-sourcing services and its impact on transportation markets using a combined approach of System Dynamics (SD) and Extended Decision Field Theory (E-DFT). Drawing on data from ride-sourcing platforms such as Uber and Lyft, the study investigates the temporal dynamics and trends of ride-sourcing demand. SD modeling is employed to capture the complex interactions and feedback loops within the ride-sourcing ecosystem at the system level. The integration of System Dynamics and extended DFT allows for a more comprehensive and holistic modeling of the ride-sourcing market. It enables exploration of various scenarios and policy interventions, providing insights into the long-term behavior of the market and facilitating evidence-based decision-making by policymakers and industry stakeholders, while accommodating individual users' decisions based on changing preferences and environments.

Leveraging OpenStreetMap Information to Identify Cluster Centers in Aggregated Movement Data
Maylin Wartenberg and Luca Marco Heitmann (Hochschule Hannover) and Marvin Auf der Landwehr (FehrAdvice & Partners AG)
Program Track: Data Science and Simulation
Program Tags: AnyLogic, Data Analytics
Abstract: Aggregated movement data is widely used for traffic simulations, but privacy constraints often limit data granularity, requiring the use of centroids as cluster representatives. However, centroids might place cluster centers in contextually irrelevant areas, such as an open field, leading to inaccuracies. This paper introduces a method that leverages an aggregation of points of interest (POIs), such as bus stops or buildings from OpenStreetMap, as cluster centers. Using trip data from a suburban region in Germany, we evaluate the spatial deviation between centroids, POIs, and real trip origins and destinations. Our findings show that POI-based centers reduce spatial deviation by up to 46% compared to centroids, with the greatest improvements in rural areas. Furthermore, in an agent-based mobility simulation, POI-based centers significantly reduced travel distances. These results demonstrate that POI-based centers offer a context-aware alternative to centroids, with significant implications for mobility modeling, urban planning, and traffic management.

Modular Digital Twins: The Foundation for the Factory of the Future
Hendrik van der Valk (TU Dortmund University, Fraunhofer Institute for Software and Systems Engineering ISST); Lasse Jurgeleit (TU Dortmund University); and Joachim Hunker (Fraunhofer Institute for Software and Systems Engineering ISST)
Program Track: Simulation as Digital Twin
Program Tags: AnyLogic, Supply Chain
Abstract: Companies face stiff challenges regarding their value chains' circular and digital transformation. Digital Twins are a valuable and powerful tool to ease such transformation. Yet, Digital Twins are not just one virtualized model but several parts with different functions. This paper analyzes Digital Twins' frameworks and reference models on an architectural level. We derive a modular framework displaying best practices based on empirical data from particular use cases. In doing so, we concentrate on discrete manufacturing processes to leverage benefits for the factory of the future. Following a design science cycle, we also demonstrate and evaluate the modular framework in a real-world application on an assembly line. The study provides an overview of the state of the art for Digital Twin frameworks and shows ways for easy implementation and avenues for further development. As a synthesis of particular architectures, the modular approach offers a novel and thoroughly generalizable blueprint for Digital Twins.

Sales Planning Using Data Farming in Trading Networks
Joachim Hunker (Fraunhofer Institute for Software and Systems Engineering ISST) and Alexander Wuttke, Markus Rabe, Hendrik van der Valk, and Mario di Benedetto (TU Dortmund University)
Program Track: Logistics, Supply Chain Management, Transportation
Program Tags: AnyLogic, Data Analytics, Python
Abstract: Volatile customer demand poses a significant challenge for the logistics networks of trading companies. To mitigate the uncertainty in future customer demand, many products are produced to stock with the goal of meeting customers' expectations. To adequately manage their product inventory, demand forecasting is a major concern in companies' sales planning. A promising approach, besides using observational data as input for forecasting methods, is simulation-based data generation, called data farming. In this paper, purposeful data generation and large-scale experiments are applied to generate input data for predicting customer demand in the sales planning of a trading company. An approach is presented for using data farming in combination with established forecasting methods such as random forests. The application is discussed on a real-world use case, highlighting the benefits of the chosen approach and providing useful, value-adding insights to motivate further research.
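To illustrate the data-farming idea from the sales planning abstract above, a minimal Python sketch is shown below. It is not the authors' implementation: the demand drivers, the toy simulator, and the factorial design are hypothetical stand-ins, and the example assumes numpy and scikit-learn are available. The point is only the pattern of sweeping a design over a simulator and training a random-forest forecaster on the farmed data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

def simulate_weekly_demand(price, promo, season):
    """Toy stochastic demand generator standing in for a full simulation model."""
    base = 200 - 15 * price + 60 * promo + 40 * np.sin(2 * np.pi * season / 52)
    return max(rng.normal(base, 10), 0.0)

# "Farm" data over a full-factorial design with replications.
design = [(p, promo, wk) for p in np.linspace(4, 10, 7)
          for promo in (0, 1) for wk in range(52)]
X = np.array([row for row in design for _ in range(5)], dtype=float)
y = np.array([simulate_weekly_demand(*row) for row in X])

# Fit an established forecasting method (random forest) on the farmed data.
forecaster = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(forecaster.predict([[6.0, 1, 20]]))  # predicted demand for one scenario
```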
Simulation-based Analysis of a Hydrogen Infrastructure to Supply a Regional Hydrogen Hub
Michael Teucke, Abderrahim Ait Alla, Lennart Steinbacher, Eike Broda, and Michael Freitag (BIBA - Bremer Institut für Produktion und Logistik GmbH at the University of Bremen)
Program Track: Agent-based Simulation
Program Tags: AnyLogic, Supply Chain
Abstract: Many countries plan to adopt hydrogen as a major energy carrier, which requires a robust infrastructure to meet rising demand. This paper presents a simulation model quantitatively analyzing the capacity of a potential hydrogen infrastructure in a coastal region of Northern Germany to supply a hydrogen hub in Bremen. The model covers ship-based imports of hydrogen, either as liquid hydrogen or ammonia, unloading at port terminals, conversion to gaseous hydrogen, pipeline transport to the hub, and end-use consumption. Various scenarios are simulated to quantitatively assess infrastructure needs under projected demand. Results show that ammonia-based imports offer greater supply reliability under low and medium demand, while liquid hydrogen performs better under high demand due to faster unloading times. Demand-driven supply policies generally outperform fixed-interval approaches by maintaining higher storage levels and aligning supply more closely with consumption patterns.
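The contrast between fixed-interval and demand-driven supply policies in the hydrogen abstract above can be illustrated with the following small Python sketch (standard library only). It is not the paper's AnyLogic model: quantities, lead times, and the demand distribution are hypothetical, and the reorder-point rule is just one plausible reading of "demand-driven".

```python
import random

def simulate(policy, days=365, capacity=5000.0, start=2500.0,
             shipment=1500.0, lead_time=7, reorder_point=2000.0, seed=1):
    """Track storage level at the hub under a given import policy."""
    random.seed(seed)
    level, pipeline, stockout_days = start, {}, 0
    for day in range(days):
        level = min(level + pipeline.pop(day, 0.0), capacity)   # shipments arriving today
        demand = random.gauss(180.0, 30.0)                      # daily hub consumption
        stockout_days += level < demand
        level = max(level - demand, 0.0)
        order = (policy == "fixed" and day % 10 == 0) or \
                (policy == "demand" and level + sum(pipeline.values()) < reorder_point)
        if order:                                               # place an import order
            pipeline[day + lead_time] = pipeline.get(day + lead_time, 0.0) + shipment
    return stockout_days

for policy in ("fixed", "demand"):
    print(policy, "stockout days:", simulate(policy))
```

Under these made-up parameters the demand-driven rule reacts to the storage level rather than the calendar, which is the mechanism the abstract credits for keeping storage levels higher.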
DDA-PDES: A Data-Dependence Analysis Parallel Discrete-Event Simulation Framework for Event-Level Parallelization of General-Purpose DES Models
Erik J. Jensen; James F. Leathrum, Jr.; Christopher J. Lynch; and Katherine Smith (Old Dominion University) and Ross Gore (Old Dominion University, Center for Secure and Intelligent Critical Systems)
Program Track: Modeling Methodology
Program Tags: C++, Parallel
Abstract: Utilizing data-dependence analysis (DDA) in parallel discrete-event simulation (PDES) to find event-level parallelism, we present the DDA-PDES framework as an alternative to spatial-decomposition (SD) PDES. DDA-PDES uses a pre-computed Independence Time Limit (ITL) table to efficiently identify events in the pending-event set that are ready for execution in a shared-memory-parallel simulation engine. Experiments on AMD, Qualcomm, and Intel platforms using several packet-routing network models and a PHOLD benchmark model demonstrate speedup of up to 8.82x and parallel efficiency of up to 0.91. In contrast with DDA-PDES, experiments with similar network models in ROSS demonstrate that SD-PDES cannot speed up the packet-routing models without degradation to routing efficacy. Our results suggest DDA-PDES is an effective method for parallelizing discrete-event simulation models that are computationally intensive, and may be superior to traditional PDES methods for spatially decomposed models with challenging communication requirements.

DEVS Models for Arctic Major Maritime Disasters
Hazel Tura Griffith and Gabriel A. Wainer (Carleton University)
Program Track: Modeling Methodology
Program Tags: C++, DEVS, Rare Events
Abstract: Modern modelling and simulation techniques allow us to safely test the policies used to mitigate disasters. We show how the DEVS formalism can be used to ease the modelling process by exploiting its modularity. We show how a policymaker's existing models of any type can be recreated with DEVS so they may be reused in any new models, decreasing the number of new models that need to be made. We recreate a sequential decision model of an Arctic major maritime disaster developed by the Canadian government as a DEVS model to demonstrate the method. The case study shows how DEVS allows policymakers to create models for studying emergency policies with greater ease. This work presents a method that policymakers can use, including models of emergency scenarios, and shows how they can benefit from creating equivalent DEVS models and from exploiting the beneficial properties of the DEVS formalism.

LLM Prompt Engineering for Performance in Simulation Software Development: A Metric-Based Approach to Using LLMs
James F. Leathrum, Abigail S. Berardi, and Yuzhong Shen (Old Dominion University)
Program Track: Simulation and Artificial Intelligence
Program Tag: C++
Abstract: Simulation software development is crucial for advancing digital engineering, scientific research, and innovation. The emergence of Generative AI, especially Large Language Models (LLMs), introduces possibilities for automating code generation, documentation, testing, and refactoring. Prompt engineering has become a key method of translating human intent into automated software output. However, integrating LLMs into software workflows requires reliable evaluation methods. This paper proposes a metric-based framework that uses software metrics to assess and guide LLM integration. Treating prompts as first-class artifacts, the framework supports improvements in code quality, maintainability, and developer efficiency. Performance, measured by execution time relative to a known high-performance codebase, is used as the initial metric of study. The work focuses on the impact of assigning roles in prompts and on refining prompt engineering strategies to generate high-performance software through structured preambles. The work provides the foundation for LLM-generated software starting from a well-defined simulation model.

Optimizing Task Scheduling in Primary Healthcare: a Reinforcement Learning Approach with Agent-based Simulation
Cristián Cárdenas (Universidad Austral de Chile), Gabriel Bustamante and Hernan Pinochet (Universidad de Santiago de Chile), Veronica Gil-Costa (Universidad Nacional de San Luis), Luis Veas-Castillo (Universidad Austral de Chile), and Mauricio Marin (Universidad de Santiago de Chile)
Program Track: Agent-based Simulation
Program Tag: C++
Abstract: The integration of Agent-Based Simulation (ABS) and Reinforcement Learning (RL) has emerged as a promising and effective approach for supporting decision-making in medical and hospital settings. This study proposes a novel framework that combines an Agent-Based Simulation with a Double Deep Q-Network (DDQN) Reinforcement Learning model to optimize the task scheduling of healthcare professionals responsible for elderly patient care. Simulations were conducted over a 365-day period involving 250 patients, each managed by a healthcare coordinator who schedules appointments. Patients autonomously decide whether to attend appointments and adhere to medical recommendations. Results show the effectiveness of the RL model in minimizing health risks, with 84.8% of patients maintaining or improving their initial health risk levels, while only 15.2% experienced an increase.

Simulation-based Design of the LENR System
John Richard Clymer (John R Clymer & Associates), Amar Vakil (Search Data Laboratory), and Keryn Johnson (3Quantum Tech. Limited and IMU LLC)
Program Track: Simulation and Artificial Intelligence
Program Tags: C++, Complex Systems
Abstract: The Information Universe (IU) communicates with the Material Universe (MU) to create and repair atoms. This is required because quarks and bosons that make up atoms have a relatively short life and must be replaced. The communication messages are described by a context-sensitive language specified using message-generating rules. The SUSY (Supersymmetric) inversion model is a process defined by these rules that describes how subatomic particles are made and combined to create or repair atoms; indeed, there is a language message (a sequence of process actions) for every IU/MU system regulatory problem. An OpEMCSS (Operational Evaluation Model for Complex Sensitive Systems) simulation model of the IU/MU system can learn these rules to gain an understanding of the SUSY messaging process. The IU/MU system simulation model will be used to learn and generate messages that result in the LENR (Low Energy Nuclear Reaction) system producing useful new physics.
A Baseline Simulation of Hybrid Misinformation and Spearphishing Campaigns in Organizational Networks
Jeongkeun Shin, Han Wang, L. Richard Carley, and Kathleen M. Carley (Carnegie Mellon University)
Program Track: Military and National Security Applications
Program Tags: Complex Systems, Cybersecurity
Abstract: This study presents an agent-based simulation that examines how pre-attack misinformation amplifies the effectiveness of spearphishing campaigns within organizations. A virtual organization of 235 end-user agents is modeled, each assigned unique human factors such as Big Five personality traits, fatigue, and job performance, derived from empirical data. Misinformation is disseminated through Facebook, where agents determine whether to believe and spread false content using regression models from prior psychological studies. When agents believe misinformation, their psychological and organizational states degrade to simulate a worst-case scenario. These changes increase susceptibility to phishing emails by impairing security-related decision-making. Informal relationship networks are constructed based on extraversion scores, and network density is varied to analyze its effect on misinformation spread. The results demonstrate that misinformation significantly amplifies organizational vulnerability by weakening individual and collective cybersecurity-relevant decision-making, emphasizing the critical need to account for human cognitive factors in future cybersecurity strategies.

A Formal and Deployable Gaming Operation to Defend IT/OT Networks
Ranjan Pal, Lillian Bluestein, Tilek Askerbekov, and Michael Siegel (Massachusetts Institute of Technology)
Program Track: Military and National Security Applications
Program Tags: Complex Systems, Conceptual Modeling, Cybersecurity, Monte Carlo
Abstract: The cyber vulnerability terrain is largely amplified in critical infrastructure systems (CISs) that attract exploitative (nation-state) adversaries. This terrain is layered over an IT- and IoT-driven operational technology (OT) network that supports CIS software applications and underlying protocol communications. Usually, the network is too large for both cyber adversaries and defenders to control every network resource under budget constraints. Hence, both sides strategically want to target 'crown jewels' (i.e., critical network resources) as points of control in the IT/OT network. Going beyond traditional CIS game theory literature that idealistically (and impractically) models attacker-defender interactions, we are the first to formally model real-world adversary-defender strategic interactions in CIS networks as a simultaneous non-cooperative network game with an auction contest success function (CSF) to derive the optimal defender strategy at Nash equilibria. We compare theoretical game insights with those from large-scale Monte Carlo game simulations and propose CIS-managerial cyber defense action items.

A Framework for Modeling and Simulation of Multi-dimensional Coupled Socio-Environmental Networked Experiments
Vanessa Ines Cedeno (University of Virginia, Escuela Superior Politecnica del Litoral) and Majid Shafiee-Jood (University of Virginia)
Program Track: Environment, Sustainability, and Resilience
Program Tags: Complex Systems, Conceptual Modeling
Abstract: Coupled socio-environmental networked experiments have been used to represent and analyze complex social phenomena and environmental issues. There is a lack of theory on how to accurately model diverse entities and the connections between them across different spatial and temporal scales. This gap often leads to significant challenges in the modeling, simulation, and analysis of formal experiments. We propose a framework that facilitates the software implementation of multi-dimensional coupled socio-environmental networked experiments. Our approach includes: (i) a formal data model paired with a computational model, together providing abstract representations, and (ii) a modeling cycle that maps socio-environmental interactions over time, allowing for multi-action, interactive experiments. The framework is flexible, allowing for a wide variety of network models, interactions, and action sets. We demonstrate its applicability through a case study on agroecological transitions, showing how the modeling cycle and data model can be used to explore socio-environmental phenomena.
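For a concrete sense of the kind of data model and modeling cycle the socio-environmental framework abstract above describes, consider the following minimal Python sketch. It is purely illustrative and not the authors' formalism: the agent states, the adoption rule, and the coupled environmental variable are hypothetical placeholders loosely themed on the agroecological-transition case study.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    id: int
    state: str                      # e.g. "conventional" or "agroecological"
    neighbors: list = field(default_factory=list)

def step(agents, env, adopt_threshold=0.5):
    """One modeling-cycle iteration: social influence coupled to an environmental variable."""
    new_states = {}
    for a in agents.values():
        if not a.neighbors:
            new_states[a.id] = a.state
            continue
        share = sum(agents[n].state == "agroecological" for n in a.neighbors) / len(a.neighbors)
        adopt = share + env["soil_stress"] > adopt_threshold
        new_states[a.id] = "agroecological" if adopt else a.state
    for a in agents.values():       # synchronous update of the network layer
        a.state = new_states[a.id]
    adopters = sum(a.state == "agroecological" for a in agents.values()) / len(agents)
    env["soil_stress"] = max(env["soil_stress"] - 0.01 * adopters, 0.0)  # environment responds

agents = {i: Agent(i, "conventional", neighbors=[(i - 1) % 6, (i + 1) % 6]) for i in range(6)}
agents[0].state = "agroecological"
env = {"soil_stress": 0.3}
for _ in range(10):
    step(agents, env)
print({a.id: a.state for a in agents.values()}, env)
```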
Three case studies of microgrids powered by renewable energy demonstrate the framework’s effectiveness in simulating energy distribution, load balancing, and dynamic power flow. The results highlight SD’s potential as a valuable modeling tool for studying modern energy systems, supporting the design of flexible and resilient infrastructures. pdfA Simulation-enabled Framework for Mission Engineering Problem Definition: Integrating Ai-driven Knowledge Retrieval with Human-centered Design Rafi Soule and Barry C. E (Old Dominion University) Program Track: Modeling Methodology Program Tags: Complex Systems, Conceptual Modeling, Python Abstract AbstractMission Engineering (ME) requires coordination of multiple systems and stakeholders, but often suffers from unclear problem definitions, fragmented knowledge, and limited engagement. This paper proposes a hybrid methodology integrating Retrieval-Augmented Generation (RAG), Human-Centered Design (HCD), and Participatory Design (PD) within a Model-Based Systems Engineering (MBSE) framework. The approach generates context-rich, stakeholder-aligned mission problem statements, as demonstrated in the Spectrum Lab case study, ultimately improving mission effectiveness and stakeholder collaboration. pdfAgent-based Social Simulation of Spatiotemporal Process-triggered Graph Dynamical Systems Zakaria Mehrab, S.S. Ravi, Henning Mortveit, Srini Venkatramanan, Samarth Swarup, Bryan Lewis, David Leblang, and Madhav Marathe (University of Virginia) Program Track: Agent-based Simulation Program Tags: Complex Systems, Emergent Behavior, System Dynamics Abstract AbstractGraph dynamical systems (GDSs) are widely used to model and simulate realistic multi-agent social dynamics, including societal unrest. This involves representing the multiagent system as a network and assigning functions to each vertex describing how they update their states based on the neighborhood states. However, in many contexts, social dynamics are triggered by external processes, which can affect the state transitions of agents. The classical GDS formalism does not incorporate such processes. We introduce the STP-GDS framework, that allows a GDS to be triggered by spatiotemporal background processes. We present a rigorous definition of the framework followed by formal analysis to estimate the size of the active neighborhood under two types of process distribution. The real-life applicability of the framework is further highlighted by an additional case study involving evacuation due to natural events, where we analyze collective agent behaviors under heterogeneous environmental and spatial settings. pdfCharacterizing Digital Factory Twins: Deriving Archetypes for Research and Industry Jonas Lick and Fiona Kattenstroth (Fraunhofer Institute for Mechatronic Systems Design IEM); Hendrik Van der Valk (TU Dortmund University); and Malte Trienens, Arno Kühn, and Roman Dumitrescu (Fraunhofer Institute for Mechatronic Systems Design IEM) Program Track: Manufacturing and Industry 4.0 Program Tag: Complex Systems Abstract AbstractThe concept of the digital twin has evolved to a key enabler of digital transformation in manufacturing. The adoption of digital twins for factories or digital factory twins remain fragmented and often unclear, particularly for small and medium-sized enterprises. This study addresses this ambiguity by systematically deriving archetypes of digital factory twins to support clearer classification, planning, and implementation. 
Based on a structured literature review and expert interviews, 71 relevant DFT use cases were identified. The result of the conducted cluster analysis is four distinct archetypes: (1) Basic Planning Factory Twin, (2) Advanced Simulation Factory Twin, (3) Integrated Operations Factory Twin, and (4) Holistic Digital Factory Twin. Each archetype is characterized by specific technical features, data integration levels, lifecycle phases, and stakeholder involvement. pdfConceptual Hybrid Modelling Framework Facilitating Scope 3 Carbon Emissions Evaluation for High Value Manufacturing Okechukwu Okorie, Victoria Omeire, Paul Mativenga, and Maria Sharmina (The University of Manchester) and Peter Hopkinson (The University of Exeter) Program Track: Hybrid Modeling and Simulation Program Tags: AnyLogic, Complex Systems, Conceptual Modeling Abstract AbstractExisting manufacturing research on greenhouse gas emissions often focuses on Scope 1 and Scope 2 emissions and underestimates Scope 3 emissions, which are indirect emissions from a firm’s value chains, city and region consumption. Traditional methodologies for evaluating carbon emissions are limited for Scope 3 emissions, due to the complexity of manufacturing supply chains and lack of quality data, leading to incomplete carbon accounting and potential double-counting. This challenge is pronounced for high value manufacturing, an emergent manufacturing perspective, due to the complexity of its supply chain network. This study develops a comprehensive hybrid modeling framework for evaluating Scope 3 emissions at product level, useful for manufacturers and modelers. pdfDiscrete Event Simulation for Assessing the Impact of Bus Fleet Electrification on Service Reliability Best Contributed Applied Paper - Finalist Minjie Xia, Wenying Ji, and Jie Xu (George Mason University) Program Track: Project Management and Construction Program Tags: Complex Systems, Data Driven, Python, Resiliency Abstract AbstractThis paper aims to derive a simulation model to evaluate the impact of bus fleet electrification on service reliability. At its core, the model features a micro discrete event simulation (DES) of an urban bus network, integrating a route-level bus operation module and a stop-level passenger travel behavior module. Key reliability indicators—bus headway deviation ratio, excess passenger waiting time, and abandonment rate—are computed to assess how varying levels of electrification influence service reliability. A case study of route 35 operated by DASH in Alexandria, VA, USA is conducted to demonstrate the applicability and interpretability of the developed DES model. The results reveal trade-offs between bus fleet electrification and service reliability, highlighting the role of operational constraints and characteristics of electric buses (EBs). This research provides transit agencies with a data-driven tool for evaluating electrification strategies while maintaining reliable and passenger-centered service. pdfEVIMAS - Digital Twin-Based Electric Vehicle Infrastructure Modeling And Analytics System Aparna Kishore, Kazi Ashik Islam, and Madhav Marathe (University of Virginia, Biocomplexity Institute) Program Track: Simulation as Digital Twin Program Tags: Complex Systems, Cyber-Physical Systems, Python Abstract AbstractThe growing shift to electric vehicles (EVs) presents significant challenges due to the complexities in spatial, temporal, and behavioral aspects of adoption and infrastructure development. 
To address these challenges, we present the EV Infrastructure Modeling and Analytics System (EVIMAS), a modular and extensible software system built using microservices principles. The system comprises three loosely coupled components: (i) a data processing pipeline that constructs a static digital model using diverse inputs, (ii) a modeling and simulation pipeline for simulating dynamic, multi-layered interactions, and (iii) an analytics pipeline that supports task execution and the analysis of results. We demonstrate the utility of the EVIMAS via three case studies. Our studies show that such analysis can be done efficiently under varying constraints and objectives, including geographic regions, analytical goals, and input configurations. EVIMAS supports fine-grained, agent-based EV simulations, facilitating the integration of new components, data, and models for EV infrastructure development. pdfEvaluating Comprehension of Agent-Based Social Simulation Visualization Techniques: A Framework Based on Statistical Literacy and Cognitive Processing Kotaro Ohori and Kyoko Kageura (Toyo University) and Shohei Yamane (Fujitsu Ltd.) Program Track: Agent-based Simulation Program Tags: Complex Systems, Output Analysis Abstract AbstractAgent-based social simulation (ABSS) has gained attention as a powerful method for analyzing complex social phenomena. However, the visualization of ABSS outputs is often difficult to interpret for users without expertise in ABSS modeling. This study analyzes how statistical literacy affects the comprehension of ABSS visualizations, based on cognitive processes defined in educational psychology. A web-based survey using five typical visualizations based on Schelling’s segregation model was conducted in Japan. The results showed a moderate positive correlation between statistical literacy and visualization comprehension, while some visualizations remained difficult to interpret even for participants with high literacy. Further machine learning analysis revealed that model performance varied by cognitive stage, and that basic and applied statistical skills had different impacts on comprehension across stages. These findings provide a foundation for designing visualizations tailored to user characteristics and offer insights for effective communication based on ABSS. pdfExplainability in Digital Twins: Overview and Challenges Meryem Mahmoud (Karlsruhe Institute of Technology) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, The Maersk Mc-Kinney Moller Institute) Program Track: Simulation as Digital Twin Program Tags: Complex Systems, Cyber-Physical Systems, Data Driven Abstract AbstractDigital Twins are increasingly being adopted across industries to support decision-making, optimization, and real-time monitoring. As these systems and, correspondingly, the underlying models of their corresponding Digital Twins, grow in complexity, there is a need to enhance explainability at several points in the Digital Twins. This is especially true for safety-critical systems and applications that require Human-in-the-Loop interactions. Ensuring explainability in both the underlying simulation models and the related decision-support mechanisms is key to trust, adoption, and informed decision-making. While explainability has been extensively explored in the context of machine learning models, its role in simulation-based Digital Twins remains less examined. 
In this paper, we review the current state of the art on explainability in simulation-based Digital Twins, highlighting key challenges, existing approaches, and open research questions. Our goal is to establish a foundation for future research and development, enabling more transparent, trustworthy, and effective Digital Twins. pdfExploring Data Requirements for Data-Driven Agent-Based Modeling Hui Min Lee (Karlsruhe Institute of Technology) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, The Maersk Mc Kinney Moller Institute) Program Track: Data Science and Simulation Program Tags: Complex Systems, Data Driven Abstract AbstractExtracting Agent-Based Models (ABMs) from data, also known as Data-Driven Agent-Based Modeling (DDABM), requires a clear understanding of data requirements and their mappings to the corresponding ABM components. DDABM is a relatively new and emerging topic, and as such, there are only highly customized and problem-specific solutions and approaches. In our previous work, we presented a framework for DDABM, identifying the different components of ABMs that can be extracted from data. Building on this, the present study provides a comprehensive analysis of existing DDABM approaches, examining prevailing trends and methodologies, focusing on the mappings between data and ABM components. By synthesizing and comparing different DDABM approaches, we establish explicit mappings that clarify data requirements and their role in enabling DDABM. Our findings enhance the understanding of DDABM and highlight the role of data in automating model extraction, highlighting its potential for advancing data-driven agent-based simulations. pdfExtending Social Force Model for the Design and Development of Crowd Control and Evacuation Strategies using Hybrid Simulation Best Contributed Applied Paper - Finalist Aaron LeGrand and Seunghan Lee (Mississippi State University) Program Track: Military and National Security Applications Program Tags: Complex Systems, Emergent Behavior, Rare Events Abstract AbstractEfficient crowd control in public spaces is critical for mitigating threats and ensuring public safety, especially in scenarios where live testing environments are limited. It is important to study crowd behavior following disruptions and strategically allocate law enforcement resources to minimize the impact on civilian populations to improve security systems and public safety. This paper proposes an extended social force model to simulate crowd evacuation behaviors in response to security threats, incorporating the influence and coordination of law enforcement personnel. This research examines evacuation strategies that balance public safety and operational efficiency by extending social force models to account for dynamic law enforcement interventions. The proposed model is validated through physics-based simulations, offering insights into effective and scalable solutions for crowd control at public events. The proposed hybrid simulation model explores the utility of integrating agent-based and physics-based approaches to enhance community resilience through improved planning and resource allocation. pdfGenerative Statecharts-Driven PDEVS Behavior Modeling Vamsi Krishna Vasa and Hessam S. Sarjoughian (Arizona State University) and Edward J. Yellig (Intel Corporation) Program Track: Modeling Methodology Program Tags: Complex Systems, DEVS Abstract AbstractBehavioral models of component-based dynamical systems are integral to building useful simulations. 
Toward this goal, approaches enabled by Large Language Models (LLMs) have been proposed and developed to generate grammar-based models for Discrete Event System Specification (DEVS). This paper introduces PDEVS-LLM, an agentic framework to assist in developing Parallel DEVS (PDEVS) models. It proposes using LLMs with statecharts to generate behaviors for parallel atomic models. Enabled with PDEVS concepts, plausible facts from the whole description of a system are extracted. The PDEVS-LLM is equipped with grammars for the PDEVS statecharts and hierarchical coupled model. LLM agents assist modelers in (re-)generating atomic models with conversation histories. Examples are developed to demonstrate the capabilities and limitations of LLMs for generative PDEVS models. pdfHierarchical Population Synthesis Using a Neural-Differentiable Programming Approach Imran Mahmood Q. Hashmi, Anisoara Calinescu, and Michael Wooldridge (University of Oxford) Program Track: Agent-based Simulation Program Tags: Complex Systems, Neural Networks, Open Source, Python Abstract AbstractAdvances in Artificial Intelligence have enabled more accurate and scalable modelling of complex social systems, which depend on realistic high-resolution population data. We introduce a novel methodology for generating hierarchical synthetic populations using differentiable programming, producing detailed demographic structures essential for simulation and analysis. Existing approaches struggle to model hierarchical population structures and optimise over discrete demographic attributes. Leveraging feed-forward neural networks and Gumbel-Softmax encoding, our approach transforms aggregated census and survey data into continuous, differentiable forms, enabling gradient-based optimisation to match target demographics with high fidelity. The framework captures multi-scale population structures, including household composition and socio-economic diversity, with verification via logical rules and validation against census cross tables. A UK case study shows our model closely replicates real-world distributions. This scalable approach provides simulation modellers and analysts with, high-fidelity synthetic populations as input for agent-based simulations of complex societal systems, enabling behaviour simulation, intervention evaluation, and demographic analysis. pdfIntegrating Decision Field Theory Within System Dynamics Framework For Modeling the Adoption Process of Ride Sourcing Services Best Contributed Theoretical Paper - Finalist Seunghan Lee (Mississippi State University) and Jee Eun Kang (University at Buffalo, SUNY) Program Track: Hybrid Modeling and Simulation Program Tags: AnyLogic, Complex Systems, System Dynamics Abstract AbstractThe rise of ride-sourcing services has changed the transportation industry, reshaping urban mobility services. This paper presents an integrated framework of the adoption of ride-sourcing services and its impact on transportation markets using a combined approach of System Dynamics (SD) and Extended-Decision Field Theory (E-DFT). Drawing on data from ride-sourcing platforms such as Uber and Lyft, the study investigates the temporal dynamics and trends of ride-sourcing demand. SD modeling is employed to capture the complex interactions and feedback loops within the ride-sourcing ecosystem at system-level. The integration of System Dynamics and extended DFT allows for a more comprehensive and holistic modeling of the ride-sourcing market. 
It enables exploration of various scenarios and policy interventions, providing insights into the long-term behavior of the market and facilitating evidence-based decision-making by policymakers and industry stakeholders while accommodating individual users' decisions based on changing preferences and environments. pdfMulti-agent Market Simulation for Deep Reinforcement Learning With High-Frequency Historical Order Streams David Byrd (Bowdoin College) Program Track: Simulation and Artificial Intelligence Program Tags: Complex Systems, Open Source, Python Abstract AbstractAs artificial intelligence rapidly co-evolves with complex modern systems, new simulation frameworks are needed to explore the potential impacts. In this article, I introduce a novel open source multi-agent financial market simulation powered by raw historical order streams at nanosecond resolution. The simulation is particularly targeted at deep reinforcement learning, but also includes momentum, noise, order book imbalance, and value traders, any number and type of which may simultaneously trade against one another and the historical order stream within the limit order books of the simulated exchange. The simulation includes variable message latency, automatic agent computation delays sampled in real time, and built-in tools for performance logging, statistical analysis, and plotting. I present the simulation features and design, demonstrate the framework on a multipart DeepRL use case with continuous actions and observations, and discuss potential future work. pdfMulti-flow Process Mining as an Enabler for Comprehensive Digital Twins of Manufacturing Systems Atieh Khodadadi and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology) Program Track: Simulation as Digital Twin Program Tags: Complex Systems, Petri Nets Abstract AbstractProcess Mining (PM) has proven useful for extracting Digital Twin (DT) simulation models for manufacturing systems. PM is a family of approaches designed to capture temporal process flows by analyzing event logs that contain time-stamped records of relevant events. With the widespread availability of sensors in modern manufacturing systems, events can be tracked across multiple process dimensions beyond time, enabling a more comprehensive performance analysis. Some of these dimensions include energy and waste. By integrating and treating these dimensions analogously to time, we enable the use of PM to extract process flows along multiple dimensions, an approach we refer to as multi-flow PM. The resulting models that capture multiple dimensions are ultimately combined to enable comprehensive DTs that support multi-objective decision-making. In this paper, we present our approach to generating these multidimensional discrete-event models and, through an illustrative case study, demonstrate how they can be utilized for multi-objective decision support. pdfReview and Classification of Challenges in Digital Twin Implementation for Simulation-Based Industrial Applications Alexander Wuttke (TU Dortmund University), Bhakti Stephan Onggo (University of Southampton), and Markus Rabe (TU Dortmund University) Program Track: Simulation as Digital Twin Program Tags: Complex Systems, Cyber-Physical Systems Abstract AbstractDigital Twins (DTs) play an increasingly important role in connecting physical objects to their digital counterparts, with simulation playing a key role in deriving valuable insights. 
Despite their potential, DT implementation remains complex and adoption in industrial operations is limited. This paper investigates the challenges of DT implementation in simulation-based industrial applications through systematically reviewing 124 publications from 2021 to 2024. The findings reveal that while nearly half of the publications tested prototypes, most are limited to laboratory settings and lack critical features such as cybersecurity or real-time capabilities. Discrete Event Simulation and numerical simulation emerge as the dominantly utilized simulation techniques in DTs. From the analysis, 33 challenges are identified and the classification of them into nine dimensions is proposed. Finally, further research opportunities are outlined. pdfSelf-Organization in Crowdsourced Food Delivery Systems Berry Gerrits and Martijn Mes (University of Twente) Program Track: Agent-based Simulation Program Tags: Complex Systems, Netlogo Abstract AbstractThis paper presents an open-source agent-based simulation model to study crowd-sourced last-mile food delivery. Within this context, we focus on a system that allows couriers with varying degrees of autonomy and cooperativeness to make decisions about accepting orders and strategically relocating. We model couriers as agents in an agent-based simulation model implemented in NetLogo. Our approach provides the necessary parameters to control and balance system performance in terms of courier productivity and delivery efficiency. Our simulation results show that moderate levels of autonomy and cooperation lead to improved performance, with significant gains in workload distribution and responsiveness to changing demand patterns. Our findings highlight the potential of self-organizing and decentralized strategies to improve scalability, adaptability, and fairness in platform-based food delivery logistics. pdfSimulation-based Design of the LENR System John Richard Clymer (John R Clymer & Associates), Amar Vakil (Search Data Laboratory), and Keryn Johnson (3Quantum Tech. Limited and IMU LLC) Program Track: Simulation and Artificial Intelligence Program Tags: C++, Complex Systems Abstract AbstractThe Information Universe (IU) communicates with the Material Universe (MU) to create and repair atoms. This is required because quarks and bosons that make up atoms have a relatively short life and must be replaced. The communication messages are described by a context-sensitive language specified using message generating rules. The SUSY (Supersymmetric) inversion model is a process defined by these rules that describes how subatomic particles are made and combined to create or repair atoms; indeed, there is a language message (a sequence of process actions) for every IU/MU system regulatory problem. An OpEMCSS (Operational Evaluation Model for Complex Sensitive Systems) simulation model of the IU/MU system can learn these rules to gain an understanding of the SUSY messaging process. The IU/MU system simulation model will be used to learn and generate messages that result in the LENR (Low Energy Nuclear Reaction) system producing useful new physics. 
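The crowd-sourced food delivery entry above models couriers whose autonomy and cooperativeness shape order acceptance and strategic relocation. The original model is implemented in NetLogo; purely as an illustration of that style of agent rule, a Python sketch with hypothetical parameters and decision formulas might look like this:

```python
import random
from dataclasses import dataclass

@dataclass
class Courier:
    autonomy: float         # 0 = follows platform suggestions, 1 = fully self-interested
    cooperativeness: float  # 0 = ignores system-level goals, 1 = helps balance workload
    position: tuple

    def accepts(self, order_reward: float, order_distance_km: float,
                platform_priority: float) -> bool:
        """Toy acceptance rule: weigh personal profitability against the
        platform's priority for the order, blended by the agent's traits."""
        personal_value = order_reward / max(order_distance_km, 0.1)
        own_threshold = 1.0 + 2.0 * self.autonomy          # pickier if more autonomous
        system_pull = self.cooperativeness * platform_priority
        return personal_value - own_threshold + system_pull > 0

    def relocate(self, demand_by_zone: dict) -> str:
        """Cooperative couriers drift toward the busiest zone; autonomous
        couriers mostly stay put (hypothetical relocation heuristic)."""
        if random.random() < self.cooperativeness:
            return max(demand_by_zone, key=demand_by_zone.get)
        return "stay"

courier = Courier(autonomy=0.4, cooperativeness=0.7, position=(0.0, 0.0))
print(courier.accepts(order_reward=6.0, order_distance_km=3.0, platform_priority=1.5))
print(courier.relocate({"center": 12, "campus": 5, "suburb": 2}))
```

Varying the two trait parameters across the courier population is what lets a model of this kind trade off individual productivity against system-wide responsiveness.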
pdfSimulation-based Dynamic Job Shop Scheduling Approach to Minimize the Impact of Resource Uncertainties Md Abubakar Siddique, Selim Molla, Amit Joe Lopes, and Md Fashiar Rahman (The University of Texas at El Paso) Program Track: Manufacturing and Industry 4.0 Program Tags: Complex Systems, FlexSim Abstract AbstractThe complexity of job shops is characterized by variable product routing, machine reliability, and operator learning that necessitates intelligent assignment strategies to optimize performance. Traditional models often rely on first-available machine selection, neglecting learning curves and processing time variability. To overcome these limitations, this paper introduces the Data-Driven Job Shop Scheduling (DDJSS) framework, which dynamically selects machines based on the status of resources at the current time steps. To evaluate the effectiveness of the proposed frameworks, we developed two scenarios using FlexSim to perform a thorough analysis. The results demonstrated significant improvements in key performance indicators, including reduced waiting time, lower queue length, and higher throughput. The output is increased by over 144% and 348%, for some exemplary jobs in the case studies mentioned in this paper. This study highlights the value of integrating learning behavior and data-driven assignments for improving decision-making in flexible job shop environments. pdfToward Automating System Dynamics Modeling: Evaluating LLMs in the Transition from Narratives to Formal Structures Jhon G. Botello (Virginia Modeling, Analysis, and Simulation Center) and Brian Llinas, Jose Padilla, and Erika Frydenlund (Old Dominion University) Program Track: Simulation and Artificial Intelligence Program Tags: Complex Systems, Conceptual Modeling, Output Analysis, System Dynamics Abstract AbstractTransitioning from narratives to formal system dynamics (SD) models is a complex task that involves identifying variables, their interconnections, feedback loops, and the dynamic behaviors they exhibit. This paper investigates how large language models (LLMs), specifically GPT-4o, can support this process by bridging narratives and formal SD structures. We compare zero-shot prompting with chain-of-thought (CoT) iterations using three case studies based on well-known system archetypes. We evaluate the LLM’s ability to identify the systemic structures, variables, causal links, polarities, and feedback loop patterns. We present both quantitative and qualitative assessments of the results. Our study demonstrates the potential of guided reasoning to improve the transition from narratives to system archetypes. We also discuss the challenges of automating SD modeling, particularly in scaling to more complex systems, and propose future directions for advancing toward automated modeling and simulation in SD assisted by AI. pdfWeapon Combat Effectiveness Analytics: Integrating Deep Learning and Big Data from Virtual-constructive Simulations Luis Rabelo, Larry Lowe, Won II Jung, Marwen Elkamel, and Gene Lee (University of Central Florida) Program Track: Military and National Security Applications Program Tags: Complex Systems, Data Driven Abstract AbstractThis paper explores the application of deep learning and big data analytics to assess Weapon Combat Effectiveness (WCE) in dynamic combat scenarios. Traditional WCE models rely on simplified assumptions and limited input variables, limiting their realism. 
To overcome these challenges, datasets are generated from integrated Virtual-Constructive (VC) simulation frameworks, combining the strengths of defense modeling, big data, and artificial intelligence. A case study features two opposing forces, a Blue Force (seven F-16 aircraft) and a Red Force (two surface-to-air missile (SAM) units), together with a high-value facility. Raw simulation data is processed to extract Measures of Performance (MOPs), which are used to train a convolutional neural network (CNN) that captures nonlinear relationships and estimates mission success probabilities. Results show the model’s resilience to data noise and its usefulness in generating decision-support tools such as probability maps. Early results suggest that deep learning integrated with federated VC simulations can significantly enhance the fidelity and flexibility of WCE analytics. pdf
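The WCE study above trains a convolutional neural network on Measures of Performance extracted from virtual-constructive simulation runs to estimate mission success probabilities. A minimal sketch of such a classifier is given below; the 1D-CNN architecture, the assumed layout of MOPs as short time series, and the random placeholder tensors are illustrative assumptions, not the authors' model or data.

```python
import torch
import torch.nn as nn

class MOPSuccessNet(nn.Module):
    """Tiny 1D CNN mapping a (batch, n_mops, timesteps) tensor of
    Measures of Performance to a mission-success probability."""
    def __init__(self, n_mops: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_mops, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        z = self.features(x).squeeze(-1)    # (batch, 32)
        return torch.sigmoid(self.head(z))  # success probability in [0, 1]

# Placeholder tensors standing in for processed simulation output.
x = torch.randn(64, 8, 50)                 # 64 runs, 8 MOPs, 50 timesteps
y = torch.randint(0, 2, (64, 1)).float()   # 1 = mission success

model = MOPSuccessNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()
for _ in range(5):                          # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```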
3D Vision Based Anti-Collision System for Automatic Load Movements with Tower Cranes - A Simulation Oriented Development Process Alexander Schock-Schmidtke, Gonzalo Bernabé Caparrós, and Johannes Fottner (Technical University of Munich) Program Track: Simulation and Artificial Intelligence Program Tags: Conceptual Modeling, Cyber-Physical Systems, Data Driven, Neural Networks Abstract AbstractThis paper presents a simulation-driven development approach for a camera-based anti-collision system designed for automated tower cranes. Stereo camera systems mounted directly on the crane's hook generate real-time 3D point clouds to detect people in the immediate danger zone of suspended loads. A virtual construction site was implemented in a game engine to simulate dynamic scenarios and varying weather conditions. The system utilizes a neural network for pedestrian detection and computes the minimum distance between load and detected persons. A closed-loop architecture enables real-time data exchange between simulation and processing components and allows easy transition to real-world cranes. The system was evaluated under different visibility conditions, showing high detection accuracy in clear weather and degraded performance in fog and rain due to the limitations of stereo vision. The results demonstrate the feasibility of using synthetic environments and point cloud-based perception to develop safety-critical assistance systems in construction automation. pdfA Formal and Deployable Gaming Operation to Defend IT/OT Networks Ranjan Pal, Lillian Bluestein, Tilek Askerbekov, and Michael Siegel (Massachusetts Institute of Technology) Program Track: Military and National Security Applications Program Tags: Complex Systems, Conceptual Modeling, Cybersecurity, Monte Carlo Abstract AbstractThe cyber vulnerability terrain is largely amplified in critical infrastructure systems (CISs) that attract exploitative (nation-state) adversaries. This terrain is layered over an IT and IoT-driven operational technology (OT) network that supports CIS software applications and underlying protocol communications. Usually, the network is too large for both cyber adversaries and defenders to control every network resource under budget constraints. Hence, both sides strategically want to target 'crown jewels' (i.e., critical network resources) as points of control in the IT/OT network. Going against traditional CIS game theory literature that idealistically (impractically) model attacker-defense interactions, we are the first to formally model real-world adversary-defender strategic interactions in CIS networks as a simultaneous non-cooperative network game with an auction contest success function (CSF) to derive the optimal defender strategy at Nash equilibria. We compare theoretical game insights with those from large-scale Monte Carlo game simulations and propose CIS-managerial cyber defense action items. pdfA Framework for Modeling and Simulation of Multi-dimensional Coupled Socio-Environmental Networked Experiments Vanessa Ines Cedeno (University of Virginia, Escuela Superior Politecnica del Litoral) and Majid Shafiee-Jood (University of Virginia) Program Track: Environment, Sustainability, and Resilience Program Tags: Complex Systems, Conceptual Modeling Abstract AbstractCoupled Socio-Environmental Networked experiments have been used to represent and analyze complex social phenomena and environmental issues.
There is a lack of theory on how to accurately model diverse entities and the connections between them across different spatial and temporal scales. This gap often leads to significant challenges in the modeling, simulating, and analysis of formal experiments. We propose a framework that facilitates software implementation of multi-dimensional coupled socio-environmental networked experiments. Our approach includes: (i) a formal data model paired with a computational model, together providing abstract representations, and (ii) a modeling cycle that maps socio-environmental interactions over time, allowing for multi-action, interactive experiments. The framework is flexible, allowing for a wide variety of network models, interactions, and action sets. We demonstrate its applicability through a case study on agroecological transitions, showing how the modeling cycle and data model can be used to explore socio-environmental phenomena. pdfA Novel System Dynamics Approach to DC Microgrid Power Flow Analysis Jose González de Durana (University of the Basque Country) and Luis Rabelo and Marwen Elkamel (University of Central Florida) Program Track: Modeling Methodology Program Tags: AnyLogic, Complex Systems, Conceptual Modeling, System Dynamics Abstract AbstractThis paper employs System Dynamics (SD) to model and analyze DC power distribution systems, focusing on methodological development and using microgrids as case studies. The approach follows a bottom-up methodology, starting with the fundamentals of DC systems and building toward more complex configurations. We coin this approach “Power Dynamics,” which uses stocks and flows to represent electrical components such as resistors, batteries, and power converters. SD offers a time-based, feedback-driven approach that captures component behaviors and system-wide interactions. This framework provides computational efficiency, adaptability, and visualization, enabling the integration of control logic and qualitative decision-making elements. Three case studies of microgrids powered by renewable energy demonstrate the framework’s effectiveness in simulating energy distribution, load balancing, and dynamic power flow. The results highlight SD’s potential as a valuable modeling tool for studying modern energy systems, supporting the design of flexible and resilient infrastructures. pdfA Simulation-enabled Framework for Mission Engineering Problem Definition: Integrating Ai-driven Knowledge Retrieval with Human-centered Design Rafi Soule and Barry C. E (Old Dominion University) Program Track: Modeling Methodology Program Tags: Complex Systems, Conceptual Modeling, Python Abstract AbstractMission Engineering (ME) requires coordination of multiple systems and stakeholders, but often suffers from unclear problem definitions, fragmented knowledge, and limited engagement. This paper proposes a hybrid methodology integrating Retrieval-Augmented Generation (RAG), Human-Centered Design (HCD), and Participatory Design (PD) within a Model-Based Systems Engineering (MBSE) framework. The approach generates context-rich, stakeholder-aligned mission problem statements, as demonstrated in the Spectrum Lab case study, ultimately improving mission effectiveness and stakeholder collaboration. 
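The “Power Dynamics” entry above represents electrical components with stocks and flows. To make that mapping concrete, the toy sketch below treats capacitor charge as a stock and resistor current as its inflow, integrated with an explicit Euler step as a system dynamics tool would; the circuit values are invented and the example is not taken from the paper.

```python
# Stock-and-flow view of a battery charging a capacitor through a resistor:
#   stock  Q  - charge on the capacitor [C]
#   flow   i  - current through the resistor [A], i = (V_src - Q/C) / R
V_src, R, C = 12.0, 10.0, 0.5      # volts, ohms, farads (illustrative values)
dt, t_end = 0.01, 20.0             # Euler step and horizon [s]

Q, t, trace = 0.0, 0.0, []
while t < t_end:
    v_cap = Q / C                  # observable derived from the stock
    i = (V_src - v_cap) / R        # flow depends on the stock (feedback loop)
    Q += i * dt                    # the stock integrates the flow
    trace.append((round(t, 2), round(v_cap, 3)))
    t += dt

print(trace[-1])   # capacitor voltage approaches V_src; time constant R*C = 5 s
```

The same stock/flow/feedback pattern generalizes to converters and batteries, which is the core of the framework described in the abstract.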
pdfAnalyzing Implementation for Digital Twins: Implications for a Process Model Annika Hesse (TU Dortmund University) and Hendrik van der Valk (TU Dortmund University, Fraunhofer Institute for Software and Systems Engineering ISST) Program Track: Simulation as Digital Twin Program Tags: Conceptual Modeling, Data Driven, Supply Chain Abstract AbstractFor many companies, digital transformation is an important lever for adapting their work and business processes to constant change, keeping them up-to-date and reactive to changes in the global market. Digital twins are seen as a promising means of holistically transforming production systems and value chains, but despite their potential, there has been a lack of standardized implementation processes, often resulting in efficiency losses. Therefore, this paper aims to empirically identify process models for implementing digital twins through a structured literature review and derive implications for a standardized, widely applicable process model. The literature review is based on vom Brocke’s methodology and focuses on scientific articles from the last years. Based on 211 identified publications, relevant papers were analyzed after applying defined exclusion criteria. The results provide fundamental insights into currently used process models and open perspectives for developing a standardized implementation framework for digital twins. pdfClassical and AI-based Explainability of Ontologies on the Example of the Digital Reference – the Semantic Web for Semiconductor and Supply Chains Containing Semiconductors Marta Bonik (Infineon Technologies AG), Eleni Tsaousi (Harokopio University of Athens), Hans Ehm (Infineon Technologies AG), and George Dimitrakopoulos (Harokopio University of Athens) Program Track: MASM: Semiconductor Manufacturing Program Tags: Conceptual Modeling, Python, Supply Chain Abstract AbstractOntologies are essential for structuring knowledge in complex domains like semiconductor supply chains but often remain inaccessible to non-technical users. This paper introduces a combined classical and AI-based approach to improve ontology explainability, using Digital Reference (DR) as a case study. The first approach leverages classical ontology visualization tools, enabling interactive access and feedback for user engagement. The second integrates Neo4j graph databases and Python with a large language model (LLM)-based architecture, facilitating natural language querying of ontologies. A post-processing layer ensures reliable and accurate responses through query syntax validation, ontology schema verification, fallback templates, and entity filtering. The approach is evaluated with natural language queries, demonstrating enhanced usability, robustness, and adaptability. By bridging the gap between traditional query methods and AI-driven interfaces, this work promotes the broader adoption of ontology-driven systems in the Semantic Web and industrial applications, including semiconductor supply chains. 
pdfConceptual Hybrid Modelling Framework Facilitating Scope 3 Carbon Emissions Evaluation for High Value Manufacturing Okechukwu Okorie, Victoria Omeire, Paul Mativenga, and Maria Sharmina (The University of Manchester) and Peter Hopkinson (The University of Exeter) Program Track: Hybrid Modeling and Simulation Program Tags: AnyLogic, Complex Systems, Conceptual Modeling Abstract AbstractExisting manufacturing research on greenhouse gas emissions often focuses on Scope 1 and Scope 2 emissions and underestimates Scope 3 emissions, which are indirect emissions from a firm’s value chains, city and region consumption. Traditional methodologies for evaluating carbon emissions are limited for Scope 3 emissions, due to the complexity of manufacturing supply chains and lack of quality data, leading to incomplete carbon accounting and potential double-counting. This challenge is pronounced for high value manufacturing, an emergent manufacturing perspective, due to the complexity of its supply chain network. This study develops a comprehensive hybrid modeling framework for evaluating Scope 3 emissions at product level, useful for manufacturers and modelers. pdfDigital Twin to Mitigate Adverse Addictive Gambling Behavior Felisa Vazquez-Abad and Jason Young (Hunter College CUNY) and Silvano A Bernabel (Graduate Center CUNY) Program Track: Simulation as Digital Twin Program Tags: Conceptual Modeling, Python Abstract AbstractThis work develops a simulation engine to create a digital twin that will monitor a gambler’s betting behavior when playing games. The digital twin is designed to perform simulations to compare outcomes of different betting strategies, under various assumptions on the psychological profile of the player. With these simulations it then produces recommendations to the player aimed at mitigating adverse outcomes. Our work focuses on efficient simulation and the creation of the corresponding GUI that will become the interface between the player and the digital twin. pdfGoal-oriented Generation of Simulation Experiments Anja Wolpers, Pia Wilsdorf, Fiete Haack, and Adelinde M. Uhrmacher (University of Rostock) Program Track: Modeling Methodology Program Tags: Conceptual Modeling, Java, Validation Abstract AbstractAutomatically generating and executing simulation experiments promises to make running simulation
studies more efficient, less error-prone, and easier to document and replicate. However, during experiment
generation, background knowledge is required regarding which experiments using which inputs and outputs
are useful to the modeler. Therefore, we conducted an interview study to identify what types of experiments
modelers perform during simulation studies. From the interview results, we defined four general goals
for simulation experiments: exploration, confirmation, answering the research question, and presentation.
Based on the goals, we outline and demonstrate an approach for automatically generating experiments by
utilizing an explicit and thoroughly detailed conceptual model. pdfInfluence of Norms in Alliance Characteristics of Humanitarian Food Agencies: Capability, Compatibility and Satisfaction Naimur Rahman Chowdhury and Rashik Intisar Siddiquee (North Carolina State University) and Julie Simmons Ivy (University of Michigan) Program Track: Environment, Sustainability, and Resilience Program Tags: Conceptual Modeling, Python, Resiliency Abstract AbstractHunger relief networks consist of agencies that work as independent partners within a food bank network. For these networks to effectively and efficiently reduce food insecurity, strategic alliances between agencies are crucial. Agency preference for forming alliances with other agencies can impact network structure and network satisfaction. In this paper, we explore the compatibility and satisfaction achieved by alliances between different agencies. We introduce two agency norms: conservative and diversifying. We develop an agent-based simulation model to investigate alliance formation in a network. We evaluate network satisfaction, satisfaction among different types of agencies, and alliance heterogeneity. We test the statistical significance of satisfaction within a norm and between norms for different agencies. Findings reveal that the ‘diversifying’ norm in the network reduces gaps in satisfaction between strong and weak agencies, ensuring fairness for weaker agencies in the network, whereas the ‘conservative’ norm favors moderate agencies in the network. pdfLeveraging International Collaboration for Interactive Lunar Simulations: An Educational Experience From SEE 2025 Kaique Govani, Andrea Lucia Braga, José Lucas Fogaça Aguiar, Giulia Oliveira, Andressa Braga, Rafael Henrique Ramos, Fabricio Torquato Leite, and Patrick Augusto Pinheiro Silva (FACENS) Program Track: Simulation in Space Program Tags: Conceptual Modeling, Distributed, Java Abstract AbstractThis paper presents an educational experience from the Simulation Exploration Experience (SEE) 2025, focusing on leveraging international collaboration to develop interactive lunar simulations. Specifically, the FACENS team created two interoperable simulation federates, a Lunar Cable Car system and an Astronaut system, using Java, Blender and the SEE Starter Kit Framework (SKF). Putting emphasis on the educational and collaborative aspects of SEE, our primary objectives included developing robust real-time interactions with international teams, improving simulation visuals, and improving astronaut behavior and logic using optimized path-finding algorithms. Seamless interoperability was demonstrated with federates developed by Brunel University and Florida Polytechnic University. Our experiences and lessons learned provide valuable insights for future teams engaged in distributed simulation development and international collaborative projects in the space exploration domain. pdfOptimization of Operations in Solid Bulk Port Terminals using Digital Twin Jackeline del Carmen Huaccha Neyra and Lorrany Cristina da Silva (GENOA), João Ferreira Netto (University of Sao Paulo), and Afonso Celso Medina (GENOA) Program Track: Simulation as Digital Twin Program Tags: Conceptual Modeling, Python Abstract AbstractThis article presents the development of a Digital Twin (DT)-based tool for optimizing scheduling in solid bulk export port terminals.
The approach integrates agent-based simulation with the Ant Colony System (ACS) metaheuristic to efficiently plan railway unloading, stockyard storage, and maritime shipping. The model interacts with operational data, anticipating issues and aiding decision-making. Validation was performed using real data from a port terminal in Brazil, yielding compatible results and reducing port stay duration. Tests were based on a Baseline Scenario, aligned with a mineral export terminal, for ACS parameter calibration, along with three additional scenarios: direct shipment, preventive maintenance, and a simultaneous route from stockyard to ships. The study highlights DT’s potential to modernize port operations, offering practical support in large-scale logistics environments. pdfReinforcement Learning in Production Planning and Control: a Review on State, Action and Reward Design in Order Release and Production Scheduling Patrick Farwick (University of Applied Science Bielefeld) and Christian Schwede (University of Applied Science Bielefeld, Fraunhofer Institute of Software and Systems Engineering) Program Track: Manufacturing and Industry 4.0 Program Tag: Conceptual Modeling Abstract AbstractProduction Planning and Control (PPC) faces increasing complexity due to volatile demand, high product variety, and dynamic shop floor conditions. Reinforcement Learning (RL) offers adaptive decision-making capabilities to address these challenges. RL often relies on simulation environments for the intensive training, allowing for short run times during execution. This paper reviews existing literature to examine how RL agents are modeled in terms of state space, action space, and reward function, focusing on order release and related production scheduling tasks. The findings reveal considerable variation in modeling approaches and a lack of theoretical guidance, particularly in reward design and feature selection. pdfToward Automating System Dynamics Modeling: Evaluating LLMs in the Transition from Narratives to Formal Structures Jhon G. Botello (Virginia Modeling, Analysis, and Simulation Center) and Brian Llinas, Jose Padilla, and Erika Frydenlund (Old Dominion University) Program Track: Simulation and Artificial Intelligence Program Tags: Complex Systems, Conceptual Modeling, Output Analysis, System Dynamics Abstract AbstractTransitioning from narratives to formal system dynamics (SD) models is a complex task that involves identifying variables, their interconnections, feedback loops, and the dynamic behaviors they exhibit. This paper investigates how large language models (LLMs), specifically GPT-4o, can support this process by bridging narratives and formal SD structures. We compare zero-shot prompting with chain-of-thought (CoT) iterations using three case studies based on well-known system archetypes. We evaluate the LLM’s ability to identify the systemic structures, variables, causal links, polarities, and feedback loop patterns. We present both quantitative and qualitative assessments of the results. Our study demonstrates the potential of guided reasoning to improve the transition from narratives to system archetypes. We also discuss the challenges of automating SD modeling, particularly in scaling to more complex systems, and propose future directions for advancing toward automated modeling and simulation in SD assisted by AI. 
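The port-terminal digital twin entry above couples agent-based simulation with the Ant Colony System (ACS) metaheuristic. As a reference for how ACS operates, the sketch below applies a generic ACS skeleton to a toy job-sequencing problem; the objective (total completion time), parameter values, and encoding are illustrative assumptions and not the scheduling model from the paper.

```python
import random

durations = [4, 2, 7, 3, 5, 1]                 # toy job processing times
n = len(durations)
alpha, beta, rho, q0 = 1.0, 2.0, 0.1, 0.9      # standard ACS parameters (assumed values)
tau0 = 1.0 / (n * sum(durations))
tau = [[tau0] * n for _ in range(n)]           # pheromone on "job j follows job i"

def build_sequence():
    seq = [random.randrange(n)]
    while len(seq) < n:
        i = seq[-1]
        cand = [j for j in range(n) if j not in seq]
        attract = {j: (tau[i][j] ** alpha) * ((1.0 / durations[j]) ** beta) for j in cand}
        if random.random() < q0:                       # ACS exploitation: best candidate
            j = max(attract, key=attract.get)
        else:                                          # biased exploration: roulette wheel
            r, acc = random.random() * sum(attract.values()), 0.0
            for j, a in attract.items():
                acc += a
                if acc >= r:
                    break
        tau[i][j] = (1 - rho) * tau[i][j] + rho * tau0  # ACS local pheromone update
        seq.append(j)
    return seq

def total_completion_time(seq):
    t, cost = 0, 0
    for j in seq:
        t += durations[j]
        cost += t
    return cost

best, best_cost = None, float("inf")
for _ in range(200):                                   # iterations
    for _ in range(10):                                # ants per iteration
        s = build_sequence()
        c = total_completion_time(s)
        if c < best_cost:
            best, best_cost = s, c
    for i, j in zip(best, best[1:]):                   # global update along best sequence
        tau[i][j] = (1 - rho) * tau[i][j] + rho * (1.0 / best_cost)

print(best, best_cost)
```

The q0 parameter is what distinguishes ACS from basic ant colony optimization: with high q0 the colony mostly exploits the best-known choices and relies on the local update to keep diversifying.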
pdfUtilization of Virtual Commissioning for Simulation-Based Energy Modeling and Dimensioning of DC-Based Production Systems Martin Barth (Friedrich-Alexander-Universität Erlangen Nürnberg); Philipp Herkel (Friedrich-Alexander-Universität Erlangen-Nürnberg,); Benjamin Gutwald and Jan Hinrich Krüger (Friedrich-Alexander-Universität Erlangen-Nürnberg); Tobias Schrage (Technische Hochschule Ingolstadt); and Tobias Reichenstein and Jörg Franke (Friedrich-Alexander-Universität Erlangen-Nürnberg) Program Track: Environment, Sustainability, and Resilience Program Tag: Conceptual Modeling Abstract AbstractThe transition toward DC-based industrial grids demands accurate yet efficient planning methods. This paper presents a simulation-driven approach that integrates existing virtual commissioning (VC) models to estimate power demand in early design stages. A minimal-parameter modeling technique is proposed to extract dynamic load profiles from 3D multibody simulations, which are then used for electrical component sizing and system optimization. The methodology is validated using a demonstrator setup consisting of a lift tower and a six-axis industrial robot, both tested under AC and DC operation. Simulation results show close agreement with real measurements, highlighting the method’s ability to capture realistic load behavior with low modeling effort. The approach offers seamless integration into existing planning workflows and supports dimensioning of safe, efficient, and regulation-compliant DC production systems. This contributes to reducing component oversizing, improving energy efficiency, and accelerating the adoption of DC grid architectures in industrial environments. pdf
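The virtual-commissioning entry above extracts dynamic load profiles from 3D multibody simulations and uses them to size DC components. A back-of-the-envelope sketch of that sizing step follows; the synthetic load profile, bus voltage, and safety margin are invented placeholders rather than values from the paper.

```python
import numpy as np

# Hypothetical power-demand profile exported from a multibody simulation of a
# lift tower plus robot cell, sampled every 10 ms [W].
t = np.arange(0.0, 30.0, 0.01)
p_load = 800 + 600 * np.clip(np.sin(0.8 * t), 0, None) + 150 * np.random.rand(t.size)

v_bus = 650.0            # nominal DC bus voltage [V] (assumption)
margin = 0.25            # sizing safety margin (assumption)

i_load = p_load / v_bus
peak_power = p_load.max()                     # drives converter peak rating
rms_current = np.sqrt(np.mean(i_load ** 2))   # drives thermal sizing of cables
avg_power = p_load.mean()                     # drives energy-oriented supply sizing

print(f"peak power      : {peak_power:8.1f} W -> rating >= {(1 + margin) * peak_power:8.1f} W")
print(f"RMS bus current : {rms_current:8.2f} A")
print(f"average power   : {avg_power:8.1f} W")
```

Working from simulated profiles rather than worst-case nameplate data is exactly what lets this kind of workflow avoid the component oversizing mentioned in the abstract.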
3D Vision Based Anti-Collision System for Automatic Load Movements with Tower Cranes - A Simulation Oriented Development Process Alexander Schock-Schmidtke, Gonzalo Bernabé Caparrós, and Johannes Fottner (Technical University of Munich) Program Track: Simulation and Artificial Intelligence Program Tags: Conceptual Modeling, Cyber-Physical Systems, Data Driven, Neural Networks Abstract AbstractThis paper presents a simulation-driven development approach for a camera-based anti-collision system designed for automated tower cranes. Stereo camera systems mounted directly on the crane's hook generate real-time 3D point clouds to detect people in the immediate danger zone of suspended loads. A virtual construction site was implemented in a game engine to simulate dynamic scenarios and varying weather conditions. The system utilizes a neural network for pedestrian detection and computes the minimum distance between load and detected persons. A closed-loop architecture enables real-time data exchange between simulation and processing components and allows easy transition to real-world cranes. The system was evaluated under different visibility conditions, showing high detection accuracy in clear weather and degraded performance in fog and rain due to the limitations of stereo vision. The results demonstrate the feasibility of using synthetic environments and point cloud-based perception to develop safety-critical assistance systems in construction automation. pdfA Method for FMI and DEVS for Co-simulation Ritvik Joshi (Carleton University, Blackberry QNX); James Nutaro (Oak Ridge National Lab); Gabriel Wainer (Carleton University); and Bernard Zeigler and Doohwan Kim (RTSync Corp) Program Track: Modeling Methodology Program Tags: Cyber-Physical Systems, DEVS Abstract AbstractThe need for standardized exchange of dynamic models led to the Functional Mockup Interface (FMI), which facilitates model exchange and co-simulation across multiple tools. Integration of this standard with modeling and simulation formalism enhances interoperability and provides opportunities for collaboration. This research presents an approach for the integration of FMI and Discrete Event System Specification (DEVS). DEVS provides the modularity required for seamlessly integrating the shared model. We propose a framework for exporting and co-simulating DEVS models as well as for importing and co-simulating continuous-time models using the FMI standard. We present a case study that shows the use of this framework to simulate the steering system of an Unmanned Ground Vehicle (UGV). pdfEVIMAS - Digital Twin-Based Electric Vehicle Infrastructure Modeling And Analytics System Aparna Kishore, Kazi Ashik Islam, and Madhav Marathe (University of Virginia, Biocomplexity Institute) Program Track: Simulation as Digital Twin Program Tags: Complex Systems, Cyber-Physical Systems, Python Abstract AbstractThe growing shift to electric vehicles (EVs) presents significant challenges due to the complexities in spatial, temporal, and behavioral aspects of adoption and infrastructure development. To address these challenges, we present the EV Infrastructure Modeling and Analytics System (EVIMAS), a modular and extensible software system built using microservices principles.
The system comprises three loosely coupled components: (i) a data processing pipeline that constructs a static digital model using diverse inputs, (ii) a modeling and simulation pipeline for simulating dynamic, multi-layered interactions, and (iii) an analytics pipeline that supports task execution and the analysis of results. We demonstrate the utility of the EVIMAS via three case studies. Our studies show that such analysis can be done efficiently under varying constraints and objectives, including geographic regions, analytical goals, and input configurations. EVIMAS supports fine-grained, agent-based EV simulations, facilitating the integration of new components, data, and models for EV infrastructure development. pdfExplainability in Digital Twins: Overview and Challenges Meryem Mahmoud (Karlsruhe Institute of Technology) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, The Maersk Mc-Kinney Moller Institute) Program Track: Simulation as Digital Twin Program Tags: Complex Systems, Cyber-Physical Systems, Data Driven Abstract AbstractDigital Twins are increasingly being adopted across industries to support decision-making, optimization, and real-time monitoring. As these systems and, correspondingly, the underlying models of their corresponding Digital Twins, grow in complexity, there is a need to enhance explainability at several points in the Digital Twins. This is especially true for safety-critical systems and applications that require Human-in-the-Loop interactions. Ensuring explainability in both the underlying simulation models and the related decision-support mechanisms is key to trust, adoption, and informed decision-making. While explainability has been extensively explored in the context of machine learning models, its role in simulation-based Digital Twins remains less examined. In this paper, we review the current state of the art on explainability in simulation-based Digital Twins, highlighting key challenges, existing approaches, and open research questions. Our goal is to establish a foundation for future research and development, enabling more transparent, trustworthy, and effective Digital Twins. pdfReview and Classification of Challenges in Digital Twin Implementation for Simulation-Based Industrial Applications Alexander Wuttke (TU Dortmund University), Bhakti Stephan Onggo (University of Southampton), and Markus Rabe (TU Dortmund University) Program Track: Simulation as Digital Twin Program Tags: Complex Systems, Cyber-Physical Systems Abstract AbstractDigital Twins (DTs) play an increasingly important role in connecting physical objects to their digital counterparts, with simulation playing a key role in deriving valuable insights. Despite their potential, DT implementation remains complex and adoption in industrial operations is limited. This paper investigates the challenges of DT implementation in simulation-based industrial applications through systematically reviewing 124 publications from 2021 to 2024. The findings reveal that while nearly half of the publications tested prototypes, most are limited to laboratory settings and lack critical features such as cybersecurity or real-time capabilities. Discrete Event Simulation and numerical simulation emerge as the dominantly utilized simulation techniques in DTs. From the analysis, 33 challenges are identified and the classification of them into nine dimensions is proposed. Finally, further research opportunities are outlined. 
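The FMI/DEVS co-simulation entry listed above (A Method for FMI and DEVS for Co-simulation) builds on the basic mechanics of DEVS atomic models. As a reminder of those mechanics, and independent of the paper's actual FMI wrapper, here is a minimal atomic model and event loop in plain Python; the Processor model, its toy input trajectory, and the single-model coordinator are all illustrative assumptions.

```python
INFINITY = float("inf")

class Processor:
    """Atomic DEVS model: accepts a job, stays busy for service_time, then emits it."""
    def __init__(self, service_time=3.0):
        self.service_time = service_time
        self.phase, self.sigma, self.job = "idle", INFINITY, None

    def time_advance(self):
        return self.sigma

    def output(self):                      # lambda: called just before delta_int
        return ("done", self.job)

    def delta_int(self):                   # internal transition
        self.phase, self.sigma, self.job = "idle", INFINITY, None

    def delta_ext(self, elapsed, job):     # external transition with elapsed time e
        if self.phase == "idle":
            self.phase, self.sigma, self.job = "busy", self.service_time, job
        else:
            self.sigma -= elapsed          # keep working; drop the new job (toy policy)

# Minimal event loop driving one atomic model with a scripted input trajectory.
model = Processor()
inputs = [(1.0, "job-A"), (2.0, "job-B"), (9.0, "job-C")]
t, last = 0.0, 0.0
while inputs or model.time_advance() < INFINITY:
    t_int = last + model.time_advance()              # next internal event
    t_ext = inputs[0][0] if inputs else INFINITY     # next external event
    if t_int <= t_ext:
        t = t_int
        print(f"t={t:4.1f}  output {model.output()}")
        model.delta_int()
    else:
        t, job = inputs.pop(0)
        model.delta_ext(t - last, job)
    last = t
```

An FMI wrapper of the kind the paper describes essentially has to map this ask-for-time-advance / step / exchange-outputs cycle onto the FMU's do-step interface.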
pdfReview of Digital Technologies for the Circular Economy and the Role of Simulation Julia Kunert, Alexander Wuttke, and Hendrik van der Valk (TU Dortmund University) Program Track: Simulation as Digital Twin Program Tag: Cyber-Physical Systems Abstract AbstractThe Circular Economy (CE) is essential for achieving sustainability, with digital technologies serving as key enablers for its adoption. However, many businesses lack knowledge about these technologies and their applications. This paper conducts a structured literature review (SLR) to identify the digital technologies proposed in recent literature for CE, their functions, and current real-world use cases. Special attention is given to simulation, which is considered a valuable digital technology for advancing CE. The analysis identifies the digital technologies Artificial Intelligence, the Internet of Things, Blockchain, simulation, cyber-physical systems, data analytics, Digital Twins, robotics, and Extended Reality. They are used for waste sorting and production automation, disassembly assistance, demand analysis, data traceability, energy and resource monitoring, environmental impact assessment, product design improvement, condition monitoring, predictive maintenance, process improvement, product design assessment, and immersive training. We discuss the findings in detail and suggest paths for further research. pdf
A Baseline Simulation of Hybrid Misinformation and Spearphishing Campaigns in Organizational Networks Jeongkeun Shin, Han Wang, L. Richard Carley, and Kathleen M. Carley (Carnegie Mellon University) Program Track: Military and National Security Applications Program Tags: Complex Systems, Cybersecurity Abstract AbstractThis study presents an agent-based simulation that examines how pre-attack misinformation amplifies the effectiveness of spearphishing campaigns within organizations. A virtual organization of 235 end user agents is modeled, each assigned unique human factors such as Big Five personality traits, fatigue, and job performance, derived from empirical data. Misinformation is disseminated through Facebook, where agents determine whether to believe and spread false content using regression models from prior psychological studies. When agents believe misinformation, their psychological and organizational states degrade to simulate a worst-case scenario. These changes increase susceptibility to phishing emails by impairing security-related decision-making. Informal relationship networks are constructed based on extraversion scores, and network density is varied to analyze its effect on misinformation spread. The results demonstrate that misinformation significantly amplifies organizational vulnerability by weakening individual and collective cybersecurity-relevant decision-making, emphasizing the critical need to account for human cognitive factors in future cybersecurity strategies. pdfA Formal and Deployable Gaming Operation to Defend IT/OT Networks Ranjan Pal, Lillian Bluestein, Tilek Askerbekov, and Michael Siegel (Massachusetts Institute of Technology) Program Track: Military and National Security Applications Program Tags: Complex Systems, Conceptual Modeling, Cybersecurity, Monte Carlo Abstract AbstractThe cyber vulnerability terrain is largely amplified in critical infrastructure systems (CISs) that attract exploitative (nation-state) adversaries. This terrain is layered over an IT and IoT-driven operational technology (OT) network that supports CIS software applications and underlying protocol communications. Usually, the network is too large for both cyber adversaries and defenders to control every network resource under budget constraints. Hence, both sides strategically want to target 'crown jewels' (i.e., critical network resources) as points of control in the IT/OT network. Going against traditional CIS game theory literature that idealistically (impractically) model attacker-defense interactions, we are the first to formally model real-world adversary-defender strategic interactions in CIS networks as a simultaneous non-cooperative network game with an auction contest success function (CSF) to derive the optimal defender strategy at Nash equilibria. We compare theoretical game insights with those from large-scale Monte Carlo game simulations and propose CIS-managerial cyber defense action items. pdfAI on Small and Noisy Data is Ineffective For ICS Cyber Risk Management Best Contributed Theoretical Paper - Finalist Yaphet Lemiesa, Ranjan Pal, and Michael Siegel (Massachusetts Institute of Technology) Program Track: Simulation and Artificial Intelligence Program Tags: Cybersecurity, Data Driven, Monte Carlo Abstract AbstractModern industrial control systems (ICSs) are increasingly relying upon IoT and CPS technology to improve cost-effective service performance at scale. Consequently, the cyber vulnerability terrain is largely amplified in ICSs. 
Unfortunately, the historical lack of (a) sufficient, non-noisy ICS cyber incident data, and (b) intelligent operational business processes to collect and analyze available ICS cyber incident data, demands the attention of the Bayesian AI community to develop cyber risk management (CRM) tools to address these challenges. In this paper we show with sufficient Monte Carlo simulation evidence that Bayesian AI on noisy (and small) ICS cyber incident data is ineffective for CRM. More specifically, we show via a novel graphical sensitivity analysis methodology that even small amounts of statistical noise in cyber incident data are sufficient to reduce ICS intrusion/anomaly detection performance by a significant percentage. Hence, ICS management processes should strive to collect sufficient non-noisy cyber incident data. pdfExpert-in-the-Loop Systems with Cross-Domain and In-Domain Few-Shot Learning for Software Vulnerability Detection David Thomas Farr (University of Washington), Kevin Talty (United States Army), Alexandra Farr (Microsoft), John Stockdale (U.S. Army), Iain Cruickshank (Carnegie Mellon University), and Jevin West (University of Washington) Program Track: Military and National Security Applications Program Tags: Cybersecurity, Python Abstract AbstractAs cyber threats become more sophisticated, rapid and accurate vulnerability detection is essential for maintaining secure systems. This study explores the use of Large Language Models in software vulnerability assessment by simulating the identification of Python code with known Common Weakness Enumerations (CWEs), comparing zero-shot, few-shot cross-domain, and few-shot in-domain prompting strategies. Our results indicate that few-shot prompting significantly enhances classification performance, particularly when integrated with confidence-based routing strategies that improve efficiency by directing human experts to cases where model uncertainty is high.
We find that LLMs can effectively generalize across vulnerability categories with minimal examples, suggesting their potential as scalable, adaptable cybersecurity tools in simulated environments. By integrating AI-driven approaches with expert-in-the-loop (EITL) decision-making, this work highlights a pathway toward more efficient and responsive cybersecurity workflows. Our findings provide a foundation for deploying AI-assisted vulnerability detection systems that enhance resilience while reducing the burden on human analysts. pdf
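The expert-in-the-loop study above routes cases to human analysts when model confidence is low. The sketch below shows that routing logic in isolation; the classify_snippet stub stands in for an LLM few-shot prompt, and the CWE labels and confidence threshold are arbitrary assumptions.

```python
from dataclasses import dataclass
import random

CWE_LABELS = ["CWE-89 (SQL injection)", "CWE-79 (XSS)", "CWE-22 (path traversal)", "benign"]

@dataclass
class Prediction:
    label: str
    confidence: float

def classify_snippet(code: str) -> Prediction:
    """Stub for a few-shot LLM classifier: returns a label and a confidence
    score. In practice this would prompt the model with in-domain examples."""
    random.seed(hash(code) % 2**32)        # deterministic per snippet, for the demo only
    return Prediction(random.choice(CWE_LABELS), random.random())

def triage(snippets, threshold=0.75):
    auto_accepted, needs_expert = [], []
    for snippet in snippets:
        pred = classify_snippet(snippet)
        bucket = auto_accepted if pred.confidence >= threshold else needs_expert
        bucket.append((snippet, pred))
    return auto_accepted, needs_expert

auto, escalate = triage(["open(user_path)", "cursor.execute(q % name)", "print('ok')"])
print(f"{len(auto)} auto-labeled, {len(escalate)} routed to a human analyst")
```

The threshold is the tuning knob: lowering it reduces analyst workload at the cost of accepting more uncertain model labels.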
Beyond Co-authorship: Discovering Novel Collaborators With Multilayer Random-Walk-Based Simulation in Academic Networks Best Contributed Theoretical Paper - Finalist Siyu Chen, Keng Hou Leong, and Jiadong Liu (Tsinghua University); Wei Chen (Tsinghua University, Tencent Technology (Shenzhen) Co. LTD.); and Wai Kin Chan (Tsinghua University) Program Track: Data Science and Simulation Program Tags: Data Analytics, Python Abstract AbstractAcademic collaboration is vital for enhancing research impact and interdisciplinary exploration, yet finding suitable collaborators remains challenging. Conventional single-layer random walk methods often struggle with the heterogeneity of academic networks and limited recommendation novelty. To overcome these limitations, we propose a novel Multilayer Random Walk simulation framework (MLRW) that simulates scholarly interactions across cooperation, institutional affiliation, and conference attendance, enabling inter-layer transitions to capture multifaceted scholarly relationships. Tested on the large-scale SciSciNet dataset, our MLRW simulation framework significantly outperforms conventional random walk methods in accuracy and novelty, successfully identifying potential collaborators beyond immediate co-authorship. Our analysis further confirms the significance of institutional affiliation as a collaborative predictor, validating its inclusion. This research contributes a more comprehensive simulation approach to scholar recommendations, enhancing the discovery of latent practical collaborations. Future research will focus on integrating additional interaction dimensions and optimizing weighting strategies to further improve diversity and relevance. pdfExploiting Functional Data for Combat Simulation Sensitivity Analysis Lisa Joanne Blumson, Charlie Peter, and Andrew Gill (Defence Science and Technology Group) Program Track: Analysis Methodology Program Tags: Data Analytics, DOE, Metamodeling, Output Analysis, R Abstract AbstractComputationally expensive combat simulations are often used to inform military decision-making and sensitivity analyses enable the quantification of the effect of military capabilities or tactics on combat mission effectiveness. The sensitivity analysis is performed using a meta-model approximating the simulation's input-output relationship and the output data that most combat meta-models are fitted to correspond to end-of-run mission effectiveness measures. However during execution, a simulation records a large array of temporal data. This paper seeks to examine whether functional combat meta-models fitted to this temporal data, and the subsequent sensitivity analysis, could provide a richer characterization of the effect of military capabilities or tactics. An approach from Functional Data Analysis will be used to illustrate the potential benefits on a case study involving a closed-loop, stochastic land combat simulation. pdfImpact of Battery Electric Trucks on Intermodal Freight Transportation - An Agent-based Simulation Study Eric Reynolds (Motlow State Community College), Nasim Nezamoddini (Oakland University), and Mustafa Can Camur and Xueping Li (University of Tennessee) Program Track: Agent-based Simulation Program Tags: AnyLogic, Data Analytics Abstract AbstractThis paper applies an agent-based simulation model to examine the feasibility of battery electric trucks (BETs) in intermodal freight transportation, focusing on the Memphis hub network. 
Two infrastructure deployment stages, depot charging only and depot plus destination charging, are modeled and simulated using AnyLogic platform to study truck utilization patterns. Real-world manufacturing sites are chosen, and the trucks are routed along roadways using a Geographic Information System (GIS) map. Battery charge levels and charging infrastructure are modeled under both scenarios. Four electric truck models from various manufacturers including Tesla Semi, Nikola Tre, Volvo VNR, and Freightliner eCascadia are compared in terms of performance and utilization. Results showed that battery electric trucks are a feasible solution for intermodal trucking operations and transporting goods from manufacturers to destinations. This comparison also highlights effects of changing shifts and adding opportunity charging at destinations on truck utilization under different battery efficiencies and capacities. pdfLeveraging OpenStreetMap Information to Identify Cluster Centers in Aggregated Movement Data Maylin Wartenberg and Luca Marco Heitmann (Hochschule Hannover) and Marvin Auf der Landwehr (FehrAdvice & Partners AG) Program Track: Data Science and Simulation Program Tags: AnyLogic, Data Analytics Abstract AbstractAggregated movement data is widely used for traffic simulations, but privacy constraints often limit data granularity, requiring the use of centroids as cluster representatives. However, centroids might locate cluster centers in contextually irrelevant areas, such as an open field, leading to inaccuracies. This paper introduces a method that leverages an aggregation of points of interest (POIs) such as bus stops or buildings from OpenStreetMap as cluster centers. Using trip data from a suburban region in Germany, we evaluate the spatial deviation between centroids, POIs, and real trip origins and destinations. Our findings show that POI-based centers reduce spatial deviation by up to 46% compared to centroids, with the greatest improvements in rural areas. Furthermore, in an agent-based mobility simulation, POI-based centers significantly reduced travel distances. These results demonstrate that POI-based centers offer a context-aware alternative to centroids, with significant implications for mobility modeling, urban planning, and traffic management. pdfMapping Applications of Computer Simulation in Orthopedic Services: A Topic Modeling Approach Alison L. Harper, Thomas Monks, Navonil Mustafee, and Jonathan T. Evans (University of Exeter) and Al-Amin Kassam (Royal Devon University Healthcare) Program Track: Healthcare and Life Sciences Program Tags: Data Analytics, Open Source Abstract AbstractOrthopedic health services are characterized by high patient volumes, long elective waits, unpredictable emergency demand, and close coupling with other hospital processes. These present significant challenges for meeting operational targets and maintaining quality of care. In healthcare, simulation has been widely used for addressing similar challenges. This systematic scoping review identifies and analyzes academic papers using simulation to address operational-level challenges for orthopedic service delivery. We analyzed 37 studies over two decades, combining a structured analysis with topic modelling to categorize and map applications. Despite widespread recognition of its potential, simulation remains underutilized in orthopedics, with fragmented application and limited real-world implementation. 
Recent trends indicate a shift toward system-wide approaches that better align with operational realities and stakeholder needs. Future research should aim to bridge methodological innovation with collaboration and practical application, such as hybrid and real-time simulation approaches focusing on stakeholder needs, and integrating relevant operational performance metrics. pdfPerformance of LLMs on Stochastic Modeling Operations Research Problems: From Theory to Practice Yuhang Wu, Akshit Kumar, Tianyi Peng, and Assaf Zeevi (Columbia University) Program Track: Simulation and Artificial Intelligence Program Tag: Data Analytics Abstract AbstractLarge language models (LLMs) have exhibited capabilities comparable to those of human experts in various fields. However, their modeling abilities—the process of converting real-life problems (or their verbal descriptions) into sensible mathematical models—have been underexplored. In this work, we take the first step to evaluate LLMs’ abilities to solve stochastic modeling problems, a model class at the core of Operations Research (OR) and decision-making more broadly. We manually procure a representative set of graduate-level homework and doctoral qualification-exam problems and test LLMs’ abilities to solve them. We further leverage SimOpt, an open-source library of simulation-optimization problems and solvers, to investigate LLMs’ abilities to make real-world decisions. Our results show that, though a nontrivial amount of work is still needed to reliably automate the stochastic modeling pipeline in reality, state-of-the-art LLMs demonstrate proficiency on par with human experts in both classroom and practical settings. pdfProbabilistic Isochrone Analysis in Military Ground Movement: Multi-Method Synergy for Adaptive Models of the Future Alexander Roman and Oliver Rose (Universität der Bundeswehr München) Program Track: Military and National Security Applications Program Tags: Data Analytics, Python Abstract AbstractTimely and accurate prediction of adversarial unit movements is a critical capability in military operations, yet traditional methods often lack the granularity or adaptability to deal with sparse, uncertain data. This paper presents a probabilistic isochrone (PI) framework to estimate future positions of military units based on sparse reconnaissance reports. The approach constructs a continuous probability density function of movement distances and derives gradient prediction areas. Validation is conducted using real-world data from the 2022 Russian invasion of Ukraine, evaluating both the inclusion of actual future positions within the predicted rings and the root mean-squared error of our method. Results show that the method yields reliable spatial uncertainty bounds and offers interpretable predictive insights. This PI approach complements existing isochrone mapping and adversarial modeling systems and demonstrates a novel fusion of simulation, spatial analytics, and uncertainty quantification in military decision support. Future work will integrate simulation to enhance predictive fidelity. 
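The probabilistic isochrone entry above derives graded prediction areas from a probability distribution of movement distances. A stripped-down version of that idea is sketched below; the observed speeds, elapsed time, noise factor, and coverage levels are invented for illustration and do not reproduce the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical road speeds of a unit type taken from sparse reports [km/h].
observed_speeds = np.array([14.0, 22.0, 18.5, 9.0, 25.0, 16.0, 12.5])
hours_since_last_report = 3.0

# Monte Carlo distance distribution: resample speeds with noise, scale by time.
samples = rng.choice(observed_speeds, size=20_000, replace=True)
samples = samples * (1.0 + 0.15 * rng.standard_normal(samples.size))   # speed uncertainty
distances = np.clip(samples, 0.0, None) * hours_since_last_report      # km travelled

# Graded prediction rings around the last known position: the radius that
# contains a given share of the simulated distances.
for coverage in (0.50, 0.80, 0.95):
    radius = np.quantile(distances, coverage)
    print(f"{int(coverage * 100):2d}% ring radius: {radius:6.1f} km")
```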
pdfSales Planning Using Data Farming in Trading Networks Joachim Hunker (Fraunhofer Institute for Software and Systems Engineering ISST) and Alexander Wuttke, Markus Rabe, Hendrik van der Valk, and Mario di Benedetto (TU Dortmund University) Program Track: Logistics, Supply Chain Management, Transportation Program Tags: AnyLogic, Data Analytics, Python Abstract AbstractVolatile customer demand poses a significant challenge for the logistics networks of trading companies. To mitigate the uncertainty in future customer demand, many products are produced to stock with the goal to be able to meet the customers’ expectations. To adequately manage their product inventory, demand forecasting is a major concern in the companies’ sales planning. A promising approach besides using observational data as an input for the forecasting methods is simulation-based data generation, called data farming. In this paper, purposeful data generation and large-scale experiments are applied to generate input data for predicting customer demand in sales planning of a trading company. An approach is presented for using data farming in combination with established forecasting methods such as random forests. The application is discussed on a real-world use case, highlighting benefits of the chosen approach, and providing useful and value-adding insights to motivate further research. pdf
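The data-farming entry above feeds purposefully generated simulation data into forecasting methods such as random forests. A compact sketch of that pipeline follows; the demand-generating function, the experimental factors, and their ranges are toy assumptions rather than the trading-company use case from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

def simulate_weekly_demand(price_discount, promo_intensity, week):
    """Toy stochastic 'simulation run' standing in for a data-farming experiment."""
    base = 200 + 120 * promo_intensity + 80 * price_discount
    seasonal = 60 * np.sin(2 * np.pi * week / 52)
    return max(0.0, base + seasonal + rng.normal(0, 25))

# Design of experiments over the input factors (the "farm").
design = np.array([[d, p, w]
                   for d in np.linspace(0.0, 0.3, 7)     # price discount
                   for p in np.linspace(0.0, 1.0, 6)     # promotion intensity
                   for w in range(1, 53)])               # calendar week
y = np.array([simulate_weekly_demand(*row) for row in design])

forecaster = RandomForestRegressor(n_estimators=200, random_state=0)
forecaster.fit(design, y)

# Forecast demand for a planned promotion in week 48 with a 10% discount.
print(forecaster.predict([[0.10, 0.8, 48]]))
```

Because the design covers factor combinations that rarely occur in observational data, the fitted forecaster can answer "what if" questions that history alone cannot, which is the motivation for data farming in the abstract.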
3D Vision Based Anti-Collision System for Automatic Load Movements with Tower Cranes - A Simulation Oriented Development Process Alexander Schock-Schmidtke, Gonzalo Bernabé Caparrós, and Johannes Fottner (Technical University of Munich) Program Track: Simulation and Artificial Intelligence Program Tags: Conceptual Modeling, Cyber-Physical Systems, Data Driven, Neural Networks Abstract AbstractThis paper presents a simulation-driven development approach for a camera-based anti-collision system designed for automated tower cranes. Stereo camera systems mounted directly on the crane's hook generate real-time 3D point clouds to detect people in the immediate danger zone of suspended loads. A virtual construction site was implemented in a game engine to simulate dynamic scenarios and varying weather conditions. The system utilizes a neural network for pedestrian detection and computes the minimum distance between load and detected persons. A closed-loop architecture enables real-time data exchange between simulation and processing components and allows easy transition to real-world cranes. The system was evaluated under different visibility conditions, showing high detection accuracy in clear weather and degraded performance in fog and rain due to the limitations of stereo vision. The results demonstrate the feasibility of using synthetic environments and point cloud-based perception to develop safety-critical assistance systems in construction automation. pdfA Data-Driven Discretized CS:GO Simulation Environment to Facilitate Strategic Multi-Agent Planning Research Yunzhe Wang (University of Southern California, USC Institute for Creative Technologies); Volkan Ustun (USC Institute for Creative Technologies); and Chris McGroarty (U.S. Army Combat Capabilities Development Command - Soldier Center) Program Track: Simulation and Artificial Intelligence Program Tag: Data Driven Abstract AbstractModern simulation environments for complex multi-agent interactions must balance high-fidelity detail with computational efficiency. We present DECOY, a novel multi-agent simulator that abstracts strategic, long-horizon planning in 3D terrains into high-level discretized simulation while preserving low-level environmental fidelity. Using Counter-Strike: Global Offensive (CS:GO) as a testbed, our framework accurately simulates gameplay using only movement decisions as tactical positioning—without explicitly modeling low-level mechanics such as aiming and shooting. Central to our approach is a waypoint system that simplifies and discretizes continuous states and actions, paired with neural predictive and generative models trained on real CS:GO tournament data to reconstruct event outcomes. Extensive evaluations show that replays generated from human data in DECOY closely match those observed in the original game. Our publicly available simulation environment provides a valuable tool for advancing research in strategic multi-agent planning and behavior generation. pdfA Reinforcement Learning-Based Discrete Event Simulation Approach For Streamlining Job-Shop Production Line Under Uncertainty Jia-Min Chen, Bimal Nepal, and Amarnath Banerjee (Texas A&M University) Program Track: Manufacturing and Industry 4.0 Program Tags: Data Driven, FlexSim Abstract AbstractStreamlining the order release strategy for a job-shop production system under uncertainty is a complex problem. The system is likely to have a number of stochastic parameters contributing to the problem complexity.
These factors make it challenging to develop optimal job-shop schedules. This paper presents a Reinforcement Learning-based discrete-event simulation approach that streamlines the policy for releasing orders in a job-shop production line under uncertainty. A digital twin (DT) was developed to simulate the job-shop production line, which facilitated the collection of process and equipment data. A reinforcement learning algorithm is connected to the DT environment and trains with the previously collected data. Once the training is complete, its solution is evaluated in the DT using experimental runs. The method is compared with a few popular heuristic-based rules. The experimental results show that the proposed method is effective in streamlining the order release in a job-shop production system with uncertainty. pdfAI on Small and Noisy Data is Ineffective For ICS Cyber Risk Management Best Contributed Theoretical Paper - Finalist Yaphet Lemiesa, Ranjan Pal, and Michael Siegel (Massachusetts Institute of Technology) Program Track: Simulation and Artificial Intelligence Program Tags: Cybersecurity, Data Driven, Monte Carlo Abstract AbstractModern industrial control systems (ICSs) are increasingly relying upon IoT and CPS technology to improve cost-effective service performance at scale. Consequently, the cyber vulnerability terrain is largely amplified in ICSs. Unfortunately, the historical lack of (a) sufficient, non-noisy ICS cyber incident data, and (b) intelligent operational business processes to collect and analyze available ICS cyber incident data, demands the attention of the Bayesian AI community to develop cyber risk management (CRM) tools to address these challenges. In this paper, we show with sufficient Monte Carlo simulation evidence that Bayesian AI on noisy (and small) ICS cyber incident data is ineffective for CRM. More specifically, we show via a novel graphical sensitivity analysis methodology that even small amounts of statistical noise in cyber incident data are sufficient to reduce ICS intrusion/anomaly detection performance by a significant percentage. Hence, ICS management processes should strive to collect sufficient non-noisy cyber incident data. pdfAURORA: Enhancing Synthetic Population Realism Through RAG and Salience-Aware Opinion Modeling Rebecca Marigliano and Kathleen Carley (Carnegie Mellon University) Program Track: Simulation and Artificial Intelligence Program Tags: Data Driven, Input Modeling, Python Abstract AbstractSimulating realistic populations for strategic influence and social-cyber modeling requires agents that are demographically grounded, emotionally expressive, and contextually coherent. Existing agent-based models often fail to capture the psychological and ideological diversity found in real-world populations. This paper introduces AURORA, a Retrieval-Augmented Generation (RAG)-enhanced framework that leverages large language models (LLMs), semantic vector search, and salience-aware topic modeling to construct synthetic communities and personas. We compare two opinion modeling strategies and evaluate three LLMs—gemini-2.0-flash, deepseek-chat, and gpt-4o-mini—in generating emotionally and ideologically varied agents. Results show that community-guided strategies improve meso-level opinion realism, and LLM selection significantly affects persona traits and emotions. 
These findings demonstrate that principled LLM integration and salience-aware modeling can enhance the realism and strategic utility of synthetic populations for simulating narrative diffusion, belief change, and social response in complex information environments. pdfAnalyzing Implementation for Digital Twins: Implications for a Process Model Annika Hesse (TU Dortmund University) and Hendrik van der Valk (TU Dortmund University, Fraunhofer Institute for Software and Systems Engineering ISST) Program Track: Simulation as Digital Twin Program Tags: Conceptual Modeling, Data Driven, Supply Chain Abstract AbstractFor many companies, digital transformation is an important lever for adapting their work and business processes to constant change, keeping them up-to-date and reactive to changes in the global market. Digital twins are seen as a promising means of holistically transforming production systems and value chains, but despite their potential, there has been a lack of standardized implementation processes, often resulting in efficiency losses. Therefore, this paper aims to empirically identify process models for implementing digital twins through a structured literature review and derive implications for a standardized, widely applicable process model. The literature review is based on vom Brocke’s methodology and focuses on scientific articles from the last years. Based on 211 identified publications, relevant papers were analyzed after applying defined exclusion criteria. The results provide fundamental insights into currently used process models and open perspectives for developing a standardized implementation framework for digital twins. pdfAutomated Business Process Simulation Studies: Where do Humans Fit In? Samira Khraiwesh and Luise Pufahl (Technical University of Munich) Program Track: Data Science and Simulation Program Tags: Data Driven, Process Mining Abstract AbstractBusiness Process Simulation (BPS) is crucial for enhancing organizational efficiency and decision-making, enabling organizations to test process changes in a virtual environment without real-world consequences. Despite advancements in automatic simulation model discovery using process mining, BPS is still underused due to challenges in accuracy. Human-in-the-Loop (HITL) integrates human expertise into automated systems, where humans guide, validate, or intervene in the automation process to ensure accuracy and context. This paper introduces a framework identifying key stages in BPS studies where HITL can be applied and the factors influencing the degree of human involvement. The framework is based on a literature review and expert interviews, providing valuable insights and implications for researchers and practitioners. pdfBalancing Airport Grid Load: The Role of Smart EV Charging, Solar, and Batteries Primoz Godec and Steve McKeever (Uppsala University) Program Track: Environment, Sustainability, and Resilience Program Tag: Data Driven Abstract AbstractAs the aviation industry works to reduce carbon emissions, airport energy optimization has also been brought into focus. This study explores strategies to reduce peak electricity demand at a major Swedish airport driven by increased electric vehicle charging (EV). EV charging increases grid load, but integrating solar power and battery storage helps stabilize fluctuations and reduce peaks. We present a framework and simulate the combined impact of these factors, demonstrating that smart scheduling with solar and battery systems effectively balances the load. 
This approach reduces high-load occurrences from 8.6% to 2.5% (where 100% would mean exceeding the threshold year-round), even with 500 additional charging points. pdfData Requirements for Reliability-Oriented Digital Twins of Energy Systems: A Case Study Analysis Omar Mostafa (Karlsruhe Institute of Technology (KIT)) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology (KIT), University of Southern Denmark (SDU)) Program Track: Reliability Modeling and Simulation Program Tag: Data Driven Abstract AbstractEnsuring reliability of energy systems is critical for maintaining a secure and adequate energy supply, especially as the integration of renewable energy increases systems’ complexity and variability. Digital Twins offer a promising approach for data-driven reliability assessment and decision support in energy systems. Digital Twins provide decision support by dynamically modeling and analyzing system reliability using real-time data to create a digital replica of the physical counterpart. As modern energy systems generate vast amounts of data, it is essential to precisely define the data required for enabling Digital Twins for their reliability assessment. In this paper, we systematically investigate the data requirements for reliability-oriented Digital Twins for energy systems and propose a structured categorization of these requirements. To illustrate our findings, we present a case study demonstrating the link between data and model extraction for enhancing system reliability. pdfData-driven Digital Twin for the Predictive Maintenance of Business Processes Paolo Bocciarelli and Andrea D'Ambrogio (University of Rome Tor Vergata) Program Track: Simulation as Digital Twin Program Tags: Data Driven, Metamodeling, Process Mining Abstract AbstractThis paper presents a data-driven framework for the predictive maintenance of Business Processes based on the Digital Twin paradigm. The proposed approach integrates process mining techniques and a low-code development approach to build reliability-aware simulation models from system logs. These models are used to automatically generate executable DTs capable of predicting resource failures and estimating the Remaining Useful Life (RUL) of system components. The predictions are then exploited to trigger preventive actions or automated reconfigurations. The framework is implemented using the PyBPMN/eBPMN framework and evaluated on a manufacturing case study. Results show that the DT enables timely interventions, minimizes system downtimes, and ensures process continuity. pdfDiscrete Event Simulation for Assessing the Impact of Bus Fleet Electrification on Service Reliability Best Contributed Applied Paper - Finalist Minjie Xia, Wenying Ji, and Jie Xu (George Mason University) Program Track: Project Management and Construction Program Tags: Complex Systems, Data Driven, Python, Resiliency Abstract AbstractThis paper aims to derive a simulation model to evaluate the impact of bus fleet electrification on service reliability. At its core, the model features a micro discrete event simulation (DES) of an urban bus network, integrating a route-level bus operation module and a stop-level passenger travel behavior module. Key reliability indicators—bus headway deviation ratio, excess passenger waiting time, and abandonment rate—are computed to assess how varying levels of electrification influence service reliability. 
A case study of route 35 operated by DASH in Alexandria, VA, USA is conducted to demonstrate the applicability and interpretability of the developed DES model. The results reveal trade-offs between bus fleet electrification and service reliability, highlighting the role of operational constraints and characteristics of electric buses (EBs). This research provides transit agencies with a data-driven tool for evaluating electrification strategies while maintaining reliable and passenger-centered service. pdfDistributionally Robust Logistic Regression with Missing Data Weicong Chen and Hoda Bidkhori (George Mason University) Program Track: Uncertainty Quantification and Robust Simulation Program Tags: Data Driven, Python Abstract AbstractMissing data presents a persistent challenge in machine learning. Conventional approaches often rely on data imputation followed by standard learning procedures, typically overlooking the uncertainty introduced by the imputation process. This paper introduces Imputation-based Distributionally Robust Logistic Regression (I-DRLR)—a novel framework that integrates data imputation with class-conditional Distributionally Robust Optimization (DRO) under the Wasserstein distance. I-DRLR explicitly models distributional ambiguity in the imputed data and seeks to minimize the worst-case logistic loss over the resulting uncertainty set. We derive a convex reformulation to enable tractable optimization and evaluate the method on the Breast Cancer and Heart Disease datasets from the UCI Repository. Experimental results demonstrate consistent improvements for out-of-sample performance in both prediction accuracy and ROC-AUC, outperforming traditional methods that treat imputed data as fully reliable. pdfEvaluating the Transferability of a Synthetic Population Generation Approach for Public Health Applications Emma Von Hoene (George Mason University); Aanya Gupta (Thomas Jefferson High School for Science and Technology); and Hamdi Kavak, Amira Roess, and Taylor Anderson (George Mason University) Program Track: Data Science and Simulation Program Tags: Data Driven, R Abstract AbstractSimulations are valuable in public health research, with synthetic populations enabling realistic policy analysis. However, methods for generating synthetic populations with domain-specific characteristics remain underexplored. To address this, we previously introduced a population synthesis approach that directly integrates health surveys. This study evaluates its transferability across health outcomes, locations, and timeframes through three case studies. The first generates a Virginia population (2021) with COVID-
19 vaccine intention, comparing results to probabilistic and regression-based approaches. The second synthesizes populations with depression (2021) for Virginia, Tennessee, and New Jersey. The third constructs Virginia populations with smoking behaviors for 2021 and 2022. Results demonstrate the method’s transferability for various health applications, with validation confirming its ability to capture accuracy, statistical relationships, and spatial heterogeneity. These findings enhance population synthesis for public health simulations and offer new datasets with small-area estimates for health outcomes, ultimately supporting public health decision-making. pdfExplainability in Digital Twins: Overview and Challenges Meryem Mahmoud (Karlsruhe Institute of Technology) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, The Maersk Mc-Kinney Moller Institute) Program Track: Simulation as Digital Twin Program Tags: Complex Systems, Cyber-Physical Systems, Data Driven Abstract AbstractDigital Twins are increasingly being adopted across industries to support decision-making, optimization, and real-time monitoring. As these systems and, correspondingly, the underlying models of their corresponding Digital Twins, grow in complexity, there is a need to enhance explainability at several points in the Digital Twins. This is especially true for safety-critical systems and applications that require Human-in-the-Loop interactions. Ensuring explainability in both the underlying simulation models and the related decision-support mechanisms is key to trust, adoption, and informed decision-making. While explainability has been extensively explored in the context of machine learning models, its role in simulation-based Digital Twins remains less examined. In this paper, we review the current state of the art on explainability in simulation-based Digital Twins, highlighting key challenges, existing approaches, and open research questions. Our goal is to establish a foundation for future research and development, enabling more transparent, trustworthy, and effective Digital Twins. pdfExploring Data Requirements for Data-Driven Agent-Based Modeling Hui Min Lee (Karlsruhe Institute of Technology) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, The Maersk Mc Kinney Moller Institute) Program Track: Data Science and Simulation Program Tags: Complex Systems, Data Driven Abstract AbstractExtracting Agent-Based Models (ABMs) from data, also known as Data-Driven Agent-Based Modeling (DDABM), requires a clear understanding of data requirements and their mappings to the corresponding ABM components. DDABM is a relatively new and emerging topic, and as such, there are only highly customized and problem-specific solutions and approaches. In our previous work, we presented a framework for DDABM, identifying the different components of ABMs that can be extracted from data. Building on this, the present study provides a comprehensive analysis of existing DDABM approaches, examining prevailing trends and methodologies, focusing on the mappings between data and ABM components. By synthesizing and comparing different DDABM approaches, we establish explicit mappings that clarify data requirements and their role in enabling DDABM. Our findings enhance the understanding of DDABM and highlight the role of data in automating model extraction, highlighting its potential for advancing data-driven agent-based simulations. 
pdfExploring Integration of Surrogate Models Through A Case Study on Variable Frequency Drives Dušan Šturek (Karlsruhe Institute of Technology, Danfoss Power Electronics and Drives) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, University of Southern Denmark) Program Track: Data Science and Simulation Program Tags: Data Driven, System Dynamics Abstract AbstractHigh-fidelity simulation models of variable frequency drives often incur expensive computation due to high granularity, complex physics and highly stiff components, hindering real-time Digital Twin Industry 4.0 applications. Surrogate models can outperform simulation solvers by orders of magnitude, potentially making real-time virtual drives feasible within practical computational limits. Despite this potential, current surrogate models suffer from limited generalizability and robustness. In this paper, we present an industrial case study exploring the combination of deep learning with surrogate modeling for simulating variable frequency drives, specifically replacing the induction motor high-fidelity component. We investigate the performance of Long-Short Term Memory-based surrogates, examining how their prediction accuracy and training time vary with synthetic datasets of different sizes, and how well the induction motor surrogates generalize across different motor resistances. This initial study aims to establish a foundation for further development, benchmarking and automation of surrogate modeling workflow for simulation enhancement. pdfFirescore: a Framework for Incident Risk Evaluation, Simulation, Coverage Optimization and Relocation Experiments Guido A.G. Legemaate (Fire Department Amsterdam-Amstelland, Safety Region Amsterdam-Amstelland); Joep van den Bogaert (Jheronimus Academy of Data Science,); Rob D. van der Mei (Centrum Wiskunde & Informatica); and Sandjai Bhulai (Vrije Universiteit Amsterdam) Program Track: Simulation as Digital Twin Program Tags: Data Driven, Python Abstract AbstractThis paper introduces fireSCore, an open source framework for incident risk evaluation, simulation, coverage optimization, and relocation experiments. As a digital twin of operational fire department logistics, its visualization frontend provides a live view of current coverage for the most common fire department units. Manually changing a unit status allows for a view into future coverage as it triggers an immediate recalculation of prognosed response times and coverage using the Open Source Routing Machine. The backend provides the controller and model, and implements various algorithms, e.g. a relocation algorithm that optimizes coverage during major incidents. The data broker handles communication with data sources and provides data for the front- and backend. An optional simulator adds an environment in which various scenarios, models and algorithms can be tested and aims to drive current and future organizational developments within the Dutch national fire service. pdfGeneral-Purpose Ranking and Selection for Stochastic Simulation with Streaming Input Data Jaime Gonzalez-Hodar and Eunhye Song (Georgia Institute of Technology) Program Track: Simulation Optimization Program Tags: Data Driven, Metamodeling, Ranking and Selection Abstract AbstractWe study ranking and selection (R&S) where the simulator’s input models are increasingly more precisely estimated from the streaming data obtained from the system. 
The goal is to decide when to stop updating the model and return the estimated optimum with a probability of good selection (PGS) guarantee. We extend the general-purpose R&S procedure by Lee and Nelson by integrating a metamodel that represents the input uncertainty effect on the simulation output performance measure. The algorithm stops when the estimated PGS is no less than 1−α accounting for both prediction error in the metamodel and input uncertainty. We then propose an alternative procedure that terminates significantly earlier while still providing the same (approximate) PGS guarantee by allowing the performance measures of inferior solutions to be estimated with lower precision than those of good solutions. Both algorithms can accommodate nonparametric input models and/or performance measures other than the means (e.g., quantiles). pdfNested Denoising Diffusion Sampling for Global Optimization Yuhao Wang (Georgia Institute of Technology), Haowei Wang (National University of Singapore), Enlu Zhou (Georgia Institute of Technology), and Szu Hui Ng (National University of Singapore) Program Track: Simulation Optimization Program Tags: Data Driven, Sampling Abstract AbstractWe propose a novel algorithm, Nested Denoising Diffusion Sampling (NDDS), for solving deterministic global optimization problems where the objective function is a black box—unknown, possibly non-differentiable, and expensive to evaluate. NDDS addresses this challenge by leveraging conditional diffusion models to efficiently approximate the evolving solution distribution without incurring the cost of extensive function evaluations. Unlike existing diffusion-based optimization methods that operate in offline settings and rely on manually specified conditioning variables, NDDS systematically generates these conditioning variables through a statistically principled mechanism. In addition, we introduce a data reweighting strategy to address the distribution mismatch between the training data and the target sampling distribution. Numerical experiments demonstrate that NDDS consistently outperforms the Extended Cross-Entropy (CE) method under the same function evaluation budget, particularly in high-dimensional settings. pdfOptimization of Queueing Systems Using Streaming Simulation Robert James Lambert, James Grant, and Rob Shone (Lancaster University) and Roberto Szechtman (Naval Postgraduate School) Program Track: Simulation Optimization Program Tags: Data Driven, Monte Carlo Abstract AbstractWe consider the problem of adaptively determining the optimal number of servers in an M/G/c queueing system in which the unknown arrival rate must be estimated using data that arrive sequentially over a series of observation periods. We propose a stochastic simulation-based approach that uses iteratively updated parameters within a greedy decision-making policy, with the selected number of servers minimising a Monte Carlo estimate of a chosen objective function. Under minimal assumptions, we derive a central limit theorem for the Monte Carlo estimator and derive an asymptotic bound on the probability of incorrect selection of the policy. We also demonstrate the empirical performance of the policy in a finite-time numerical experiment. 
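The queueing-optimization abstract directly above combines a streaming estimate of the arrival rate with a Monte Carlo evaluation of candidate server counts. The sketch below shows one plausible shape of such a loop for an FCFS M/G/c queue; the cost function, the lognormal service-time assumption, and all constants are illustrative assumptions rather than the authors' actual procedure.

```python
# Illustrative sketch only: greedy selection of the number of servers c for an
# M/G/c queue, using a Monte Carlo estimate of an assumed cost function and an
# arrival rate estimated from streaming observations. Cost coefficients, the
# service-time distribution, and the update scheme are hypothetical.
import heapq
import numpy as np

rng = np.random.default_rng(1)

def mean_wait_mgc(lam, c, n_customers=5_000):
    """Monte Carlo estimate of the mean FCFS waiting time in an M/G/c queue."""
    arrivals = np.cumsum(rng.exponential(1.0 / lam, n_customers))
    services = rng.lognormal(mean=-0.5, sigma=0.5, size=n_customers)  # assumed G
    free_at = [0.0] * c            # heap of server-available times
    total_wait = 0.0
    for t, s in zip(arrivals, services):
        start = max(t, free_at[0])           # earliest-free server serves FCFS
        total_wait += start - t
        heapq.heapreplace(free_at, start + s)
    return total_wait / n_customers

def choose_servers(lam_hat, c_max=10, server_cost=1.0, wait_cost=5.0):
    """Greedy policy: pick c minimizing an estimated cost (hypothetical form)."""
    costs = {c: server_cost * c + wait_cost * lam_hat * mean_wait_mgc(lam_hat, c)
             for c in range(1, c_max + 1)}
    return min(costs, key=costs.get)

# Streaming estimation of the unknown arrival rate over observation periods.
lam_true, lam_hat, n_seen = 3.0, 0.0, 0
for period in range(1, 6):
    counts = rng.poisson(lam_true, size=50)        # one new observation period
    n_seen += counts.size
    lam_hat += (counts.mean() - lam_hat) * counts.size / n_seen
    print(f"period {period}: lam_hat={lam_hat:.2f}, chosen c={choose_servers(lam_hat)}")
```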
pdfOptimizing Precast Concrete Production: a Discrete-event Simulation Approach with Simphony Julie Munoz, Mohamad Itani, Mohammad Elahi, Anas Itani, and Yasser Mohamed (University of Alberta) Program Track: Project Management and Construction Program Tags: Data Driven, Validation Abstract AbstractPrecast concrete manufacturers increasingly face throughput bottlenecks as market demand rises and curing-area capacity reaches its limit. This paper develops a validated discrete-event simulation (DES) model of a Canadian precast panel plant using the Simphony platform. Field observations, time studies, and staff interviews supply task durations, resource data, and variability distributions. After verification and validation against production logs, two improvement scenarios are tested: (1) doubling curing beds and (2) halving curing time with steam curing. Scenario A reduces total cycle time by 26 %, while Scenario B achieves a 24 % reduction and lowers curing-bed utilization by 5 %. Both scenarios cut crane waiting and queue lengths, demonstrating that relieving the curing bottleneck drives system-wide gains. The study confirms DES as an effective, low-risk decision-support tool for off-site construction, offering plant managers clear, data-driven guidance for investment planning and lean implementation. pdfOut of the Past: An AI-Enabled Pipeline for Traffic Simulation from Noisy, Multimodal Detector Data and Stakeholder Feedback Rex Chen and Karen Wu (Carnegie Mellon University), John McCartney (Path Master Inc.), and Norman Sadeh and Fei Fang (Carnegie Mellon University) Program Track: Simulation and Artificial Intelligence Program Tag: Data Driven Abstract AbstractHow can a traffic simulation be designed to faithfully reflect real-world traffic conditions? One crucial step is modeling the volume of traffic demand. But past demand modeling approaches have relied on unrealistic or suboptimal heuristics, and they have failed to adequately account for the effects of noisy and multimodal data on simulation outcomes. In this work, we integrate advances in AI to construct a three-step, end-to-end pipeline for systematically modeling traffic demand from detector data: computer vision for vehicle counting from noisy camera footage, combinatorial optimization for vehicle route generation from multimodal data, and large language models for iterative simulation refinement from natural language feedback. Using a road network from Strongsville, Ohio as a testbed, we show that our pipeline accurately captures the city’s traffic patterns in a granular simulation. Beyond Strongsville, incorporating noise and multimodality makes our framework generalizable to municipalities with different levels of data and infrastructure availability. pdfQuantile-Boosted Stochastic Approximation Best Contributed Theoretical Paper - Finalist Jinyang Jiang (Peking University), Bernd Heidergott (Vrije Universiteit Amsterdam), and Yijie Peng (Peking University) Program Track: Simulation Optimization Program Tag: Data Driven Abstract AbstractStochastic approximation (SA) offers a recursive framework for tracking the quantiles of a parameterized system’s output distribution using observed samples. In this paper, we employ SA-based quantile trackers to approximate the gradient of an objective function and integrate them into a unified SA scheme for finding stationary points. The proposed gradient estimation framework accommodates both finite-difference and score-function methods. 
Our method allows for dynamically adjusting the number of trackers within a single optimization run. This adaptability enables more efficient and accurate approximation of the true objective gradient. The resulting single time-scale estimator is also applicable to stationary performance measures. Numerical experiments confirm the effectiveness and robustness of the proposed approach. pdfSupply Chain Optimization via Generative Simulation and Iterative Decision Policies Haoyue Bai (Arizona State University); Haoyu Wang (NEC Labs America.); Nanxu Gong, Xinyuan Wang, and Wangyang Ying (Arizona State University); Haifeng Chen (NEC Labs America.); and Yanjie Fu (Arizona State University) Program Track: Data Science and Simulation Program Tags: Data Driven, Python, Supply Chain Abstract AbstractHigh responsiveness and economic efficiency are critical objectives in supply chain transportation, both of which are influenced by strategic decisions on shipping mode. An integrated framework combining an efficient simulator with an intelligent decision-making algorithm can provide an observable, low-risk environment for transportation strategy design. An ideal simulation-decision framework must (1) generalize effectively across various settings, (2) reflect fine-grained transportation dynamics, (3) integrate historical experience with predictive insights, and (4) maintain tight integration between simulation feedback and policy refinement. We propose Sim-to-Dec framework to satisfy these requirements. Specifically, Sim-to-Dec consists of a generative simulation module, which leverages autoregressive modeling to simulate continuous state changes, reducing dependence on handcrafted domain-specific rules and enhancing robustness against data fluctuations; and a history–future dual-aware decision model, refined iteratively through end-to-end optimization with simulator interactions. Extensive experiments conducted on three real-world datasets demonstrate that Sim-to-Dec significantly improves timely delivery rates and profit. pdfWeapon Combat Effectiveness Analytics: Integrating Deep Learning and Big Data from Virtual-constructive Simulations Luis Rabelo, Larry Lowe, Won II Jung, Marwen Elkamel, and Gene Lee (University of Central Florida) Program Track: Military and National Security Applications Program Tags: Complex Systems, Data Driven Abstract AbstractThis paper explores the application of deep learning and big data analytics to assess Weapon Combat Effectiveness (WCE) in dynamic combat scenarios. Traditional WCE models rely on simplified assumptions and limited input variables, limiting their realism. To overcome these challenges, datasets are generated from integrated Virtual-Constructive (VC) simulation frameworks, combining strengths of defense modeling, big data, and artificial intelligence. A case study features two opposing forces: a Blue Force (seven F-16 aircraft) and a Red Force (two surface-to-air missile (SAM) units) and a high-value facility. Raw simulation data is processed to extract Measures of Performance (MOPs) to train a convolutional neural network (CNN), to capture nonlinear relationships and estimate mission success probabilities. Results show the model’s resilience to data noise and its usefulness in generating decision-support tools like probability maps. Early results suggest that deep learning integrated with federated VC simulations can significantly enhance fidelity and flexibility of WCE analytics. pdf
A Method for FMI and DEVS for Co-simulation Ritvik Joshi (Carleton University, Blackberry QNX); James Nutaro (Oak Ridge National Lab); Gabriel Wainer (Carleton University); and Bernard Zeigler and Doohwan Kim (RTSync Corp) Program Track: Modeling Methodology Program Tags: Cyber-Physical Systems, DEVS Abstract AbstractThe need for standardized exchange of dynamic models led to the Functional Mockup Interface (FMI), which facilitates model exchange and co-simulation across multiple tools. Integration of this standard with modeling and simulation formalism enhances interoperability and provides opportunities for collaboration. This research presents an approach for the integration of FMI and Discrete Event System Specification (DEVS). DEVS provides the modularity required for seamlessly integrating the shared model. We propose a framework for exporting and co-simulating DEVS models as well as for importing and co-simulating continuous-time models using the FMI standard. We present a case study that shows the use of this framework to simulate the steering system of an Unmanned Ground Vehicle (UGV). pdfDEVS Models for Arctic Major Maritime Disasters Hazel Tura Griffith and Gabriel A. Wainer (Carleton University) Program Track: Modeling Methodology Program Tags: C++, DEVS, Rare Events Abstract AbstractModern modelling and simulation techniques allow us to safely test the policies used to mitigate disasters. We show how the DEVS formalism can be used to ease the modelling process by exploiting its modularity. We show how a policymaker’s existing models of any type can be recreated with DEVS so they may be reused in any new models, decreasing the number of new models that need to be made. We recreate a sequential decision model of an Arctic major maritime disaster developed by the Canadian government as a DEVS model to demonstrate the method. The case study shows how DEVS allows policymakers to create models for studying emergency policies with greater ease. This work shows a method that can be used by policymakers, including models of emergency scenarios, and how they can benefit from creating equivalent DEVS models, as well as exploiting the beneficial properties of the DEVS formalism. pdfGenerative Statecharts-Driven PDEVS Behavior Modeling Vamsi Krishna Vasa and Hessam S. Sarjoughian (Arizona State University) and Edward J. Yellig (Intel Corporation) Program Track: Modeling Methodology Program Tags: Complex Systems, DEVS Abstract AbstractBehavioral models of component-based dynamical systems are integral to building useful simulations. Toward this goal, approaches enabled by Large Language Models (LLMs) have been proposed and developed to generate grammar-based models for Discrete Event System Specification (DEVS). This paper introduces PDEVS-LLM, an agentic framework to assist in developing Parallel DEVS (PDEVS) models. It proposes using LLMs with statecharts to generate behaviors for parallel atomic models. Enabled with PDEVS concepts, plausible facts from the whole description of a system are extracted. The PDEVS-LLM is equipped with grammars for the PDEVS statecharts and hierarchical coupled model. LLM agents assist modelers in (re-)generating atomic models with conversation histories. Examples are developed to demonstrate the capabilities and limitations of LLMs for generative PDEVS models. 
pdfIntegrated RTS-RTD Simulation Framework for Semiconductor Manufacturing System Seongho Cho (Ajou University), Donguk Kim (LG Production and Research Institute), and Sangchul Park (Ajou University) Program Track: MASM: Semiconductor Manufacturing Program Tags: DEVS, MOZART LSE Abstract AbstractThe complexity of modern semiconductor fabrication (FAB) systems makes it difficult to implement integrated simulation systems that combine production and logistics simulators. As a result, these simulators have traditionally been developed independently. However, in actual FAB operations, information exchange between Real-Time Schedulers (RTS) and Real-Time Dispatchers (RTD) coordinates production activities. To address this issue, we propose a coupled RTS–RTD simulation framework that integrates production and logistics simulators into a unified environment. In addition, we introduce a dynamic decision-making rule that enables flexible responses when logistical constraints prevent execution of the original production schedule. Simulation experiments were conducted using the SMT2020 and SMAT2022 datasets. The results show that selectively following RTD decisions, instead of strictly adhering to RTS-generated schedules, can significantly improve production efficiency in FAB operations. pdfTemporal Diffusion Models From Parallel DEVS Models: A Generative-AI Approach for Semiconductor Fabrication Manufacturing Systems Vamsi Krishna Pendyala and Hessam S. Sarjoughian (Arizona State University) and Edward J. Yellig (Intel Corporation) Program Track: Simulation and Artificial Intelligence Program Tag: DEVS Abstract AbstractGenerative-AI models offer powerful capabilities for learning complex dynamics and generating high-fidelity synthetic data. In this work, we propose Conditional Temporal Diffusion (CTD) models for generating wafer fabrication time-series trajectories conditioned on static factory configurations. The model is trained using data from a Parallel Discrete Event System Specification (PDEVS)-based MiniFab benchmark model, which simulates different steps of a semiconductor manufacturing process and captures the wafer processing dynamics (e.g., throughput and turnaround time). These simulations incorporate multiscale, realistic behaviors such as preventive maintenance and wafer dispatching under both uniform and sinusoidal generation patterns. CTD models are conditioned on static covariates, including wafer composition, lot sizes, repair type, and wafer generator mode of the factory. Experimental evaluations demonstrate that the synthetic outputs achieve high fidelity with average errors below 15% while significantly reducing data generation time. This highlights CTD’s effectiveness as a scalable and efficient surrogate for complex manufacturing simulations. pdfTowards a DEVS-Based Simulation Engine for Digital Twin Applications Arnis Lektauers (Riga Technical University) Program Track: Simulation as Digital Twin Program Tags: DEVS, Open Source Abstract AbstractDigital twins (DT) are increasingly being adopted to improve system monitoring, prediction, and decision making in various domains. Although simulation plays a central role in many DT implementations, a lack of formal modeling foundations often leads to ad hoc and non-scalable solutions. This paper proposes a simulation engine for DT applications based on the Discrete Event System Specification (DEVS) formalism. 
DEVS provides a robust, modular, and hierarchical modeling framework suitable for modeling the structure and behavior of complex cyber-physical systems. A key contribution is the integration of the Parallel DEVS for Multicomponent Systems (multiPDEVS) formalism with X-Machines to support state and memory separation for simulation models with the goal of improving model scalability and reusability, as well as providing a basis for integration with DTs. The paper presents the architectural design of the engine, highlights its main functional components, and demonstrates its capabilities using a preliminary use case. pdf
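Several of the abstracts in this block build on DEVS atomic models, which are specified through a time-advance function, internal, external, and (in Parallel DEVS) confluent transition functions, and an output function. The minimal Python sketch below illustrates those ingredients for a toy single-server processor; the class layout and names are simplified illustrations and do not reflect the API of any of the tools cited above.

```python
# Illustrative sketch only: the characteristic functions of a (Parallel) DEVS
# atomic model, shown for a toy single-server "Processor". Class and method
# names are simplified stand-ins, not the API of any cited tool.
from dataclasses import dataclass, field

INFINITY = float("inf")

@dataclass
class Processor:
    service_time: float = 2.0
    busy: bool = False
    queue: list = field(default_factory=list)
    sigma: float = INFINITY            # time until the next internal event

    def time_advance(self):            # ta(s)
        return self.sigma

    def output(self):                  # lambda(s), called just before delta_int
        return {"done": self.queue[0]} if self.busy else {}

    def delta_int(self):               # internal transition: finish current job
        self.queue.pop(0)
        self.busy = bool(self.queue)
        self.sigma = self.service_time if self.busy else INFINITY

    def delta_ext(self, elapsed, inputs):   # external transition: jobs arrive
        self.sigma = max(self.sigma - elapsed, 0.0) if self.busy else self.service_time
        self.queue.extend(inputs.get("in", []))
        self.busy = True

    def delta_con(self, inputs):       # confluent transition (Parallel DEVS)
        self.delta_int()
        self.delta_ext(0.0, inputs)

# Tiny driver: two jobs arrive together at t=0, then run the internal events.
p, t = Processor(), 0.0
p.delta_ext(0.0, {"in": ["job-1", "job-2"]})
while p.time_advance() < INFINITY:
    t += p.time_advance()
    print(t, p.output())               # job-1 done at t=2.0, job-2 at t=4.0
    p.delta_int()
```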
A Heuristic-based Rolling Horizon Method for Dynamic and Stochastic Unrelated Parallel Machine Scheduling Shufang Xie, Tao Zhang, and Oliver Rose (Universität der Bundeswehr München) Program Track: Manufacturing and Industry 4.0 Program Tags: AnyLogic, Distributed Abstract AbstractIn stochastic manufacturing environments, disruptions such as machine breakdowns, variable processing times, and unexpected delays make static scheduling approaches ineffective. To address this, we propose a heuristic-based rolling horizon scheduling method for unrelated parallel machines. The rolling horizon framework addresses system stochasticity by enabling dynamic adaptation through frequent rescheduling of both existing jobs and those arriving within a rolling lookahead window. This method decomposes the global scheduling problem into smaller, more manageable subproblems. Each subproblem is solved using a heuristic approach based on a suitability score that incorporates key factors such as job properties, machine characteristics, and job-machine interactions. Simulation-based experiments show that the proposed method outperforms traditional dispatching rules in dynamic and stochastic manufacturing environments with a fixed number of jobs, achieving shorter makespans and cycle times, reduced WIP levels, and lower machine utilization. pdfConStrobe - Construction Operations Simulation for Time and Resource Based Evaluations Joseph Louis (Oregon State University) Program Track: Project Management and Construction Program Tag: Distributed Abstract AbstractThis paper introduces ConStrobe – Construction Operations Simulation for Time and Resource Based Evaluations – which is a simulation software that builds upon knowledge in construction field operations simulation by providing the capabilities of running High Level Architecture (HLA)-compliant distributed simulations and being amenable to automation from external programs written in the Python language for two-way communication with external data sources. These features are provided to overcome some of the major limitations of existing construction operations simulation tools that have hindered their widespread adoption by industry. The framework of this software is explained along with a sample demonstration case to provide users with an overview of its capabilities and understanding of its working. It is anticipated that the novel capabilities of ConStrobe can reduce the time and effort required to create simulations to enable process analysis for decision-making under uncertainty for complex operations in the construction and built operations domain. pdfLeveraging International Collaboration for Interactive Lunar Simulations: An Educational Experience From See 2025 Kaique Govani, Andrea Lucia Braga, José Lucas Fogaça Aguiar, Giulia Oliveira, Andressa Braga, Rafael Henrique Ramos, Fabricio Torquato Leite, and Patrick Augusto Pinheiro Silva (FACENS) Program Track: Simulation in Space Program Tags: Conceptual Modeling, Distributed, Java Abstract AbstractThis paper presents an educational experience from the Simulation Exploration Experience (SEE) 2025, focusing on leveraging international collaboration to develop interactive lunar simulations. Specifically, the FACENS team created two interoperable simulation federates, a Lunar Cable Car system and an Astronaut system, using Java, Blender and the SEE Starter Kit Framework (SKF). 
Putting emphasis on the educational and collaborative aspects of SEE, our primary objectives included developing robust real-time interactions with international teams, improving simulation visuals, and improving astronaut behavior and logic using optimized path‑finding algorithms. Seamless interoperability was demonstrated with federates developed by Brunel University and Florida Polytechnic University. Our experiences and lessons learned provide valuable insights for future teams engaged in distributed simulation development and international collaborative projects in the space exploration domain. pdfOptimizing Event Timestamp Processing in Time Warp Gaurav Shinde (STERIS, Inc); Sounak Gupta (Oracle, Inc); and Philip A. Wilsey (University of Cincinnati) Program Track: Modeling Methodology Program Tags: Distributed, Parallel Abstract Abstractwarped2 is a general purpose discrete event simulation kernel that contains a robust event time comparison mechanism to support a broad range of modeling domains. The warped2 kernel can be configured for sequential, parallel, or distributed execution. The parallel or distributed versions implement the Time Warp mechanism (with its rollback and relaxed causality) such that a total order on events can be maintained. To maintain a total order, warped2 has an event ordering mechanism that contains up to 10 comparison operations. While not all comparisons require evaluation of all 10 relations, the overall cost of time comparisons in a warped2 simulation can still consume approximately 15-20% of the total runtime. This work examines the runtime costs of time comparisons in a parallel configuration of the warped2 simulation kernel. Optimizations to the time comparison mechanism are explored and the performance impacts of each are reported. pdfUnfolding Diffusive and Refinement Phases Of Heterogeneous Performance-Aware Re-Partitioning for Distributed Traffic Simulation Anibal Siguenza-Torres (Technical University of Munich); Alexander Wieder, Stefano Bortoli, and Margherita Grossi (Huawei Munich Research Center); Wentong Cai (Nanyang Technological University); and Alois Knoll (Technical University of Munich) Program Track: Logistics, Supply Chain Management, Transportation Program Tag: Distributed Abstract AbstractThis work presents substantial improvements to Enhance, a recent approach for graph
partitioning in large-scale distributed microscopic traffic simulations, particularly in challenging
load-balancing scenarios within heterogeneous computing environments. With a thorough analysis of the diffusive and refinement phases of the Enhance algorithm, we identified orthogonal opportunities for optimizations that markedly improved the quality of the generated partitionings. We validated these improvements using synthetic scenarios, achieving up to a 46.5% reduction in estimated runtime compared to the original algorithm and providing sound reasoning and intuitions to explain the nature and magnitude of the improvements. Finally, we show experimentally that the performance gains observed in the synthetic scenario partially translate into performance gains in the real system. pdfUsing the Tool Command Language for a Flight Simulation Flight Dynamics Model Frank Morlang (Private Person) and Steffen Strassburger (Ilmenau University of Technology) Program Track: Aviation Modeling and Analysis Program Tags: Distributed, System Dynamics Abstract AbstractThis paper introduces a methodology for simulating flight dynamics utilizing the Tool Command Language (Tcl). Tcl, created by John Ousterhout, was conceived as an embeddable scripting language for an experimental Computer Aided Design (CAD) system. Tcl, a mature and maturing language recognized for its simplicity, versatility, and extensibility, is a compelling contender for the integration of flight dynamics functionalities. The work presents an extension method utilizing Tcl's adaptability for a novel type of flight simulation programming. Initial test findings demonstrate performance appropriate for the creation of human-in-the-loop real-time flight simulations. The possibility for efficient and precise modeling of future complicated distributed simulation elements is discussed, and recommendations regarding subsequent development priorities are drawn. pdf
Automating Traffic Microsimulation from Synchro UTDF to SOMO Xiangyong Luo (ORNL); Yiran Zhang (University of Washington); Guanhao Xu, Wan Li, and Chieh Ross Wang (Oak Ridge National Laboratory); and Xuesong Simon Zhou (Arizona State University) Program Track: Reliability Modeling and Simulation Program Tags: DOE, Python Abstract AbstractModern transportation research relies on seamlessly integrating traffic signal data with robust network representation and simulation tools. This study presents utdf2gmns, an open-source Python tool that automates conversion of the Universal Traffic Data Format, including network representation, signalized intersections, and turning volumes into the General Modeling Network Specification (GMNS) Standard. The resulting GMNS-compliant network can be converted for microsimulation in SUMO. By automatically extracting intersection control parameters and aligning them with GMNS conventions, utdf2gmns minimizes manual preprocessing and data loss. utdf2gmns also integrates with the Sigma-X engine to extract and visualize key traffic control metrics, such as phasing diagrams, turning volumes, volume-to-capacity ratios, and control delays. This streamlined workflow enables efficient scenario testing, accurate model building, and consistent data management. Validated through case studies, utdf2gmns reliably models complex urban corridors, promoting reproducibility and standardization. Documentation is available on GitHub and PyPI, supporting easy integration and community engagement. pdfCalibrating Driver Aggression Parameters in Microscopic Simulation using Safety-Surrogate Measures David Hong and Montasir Abbas (Virginia Tech) Program Track: Logistics, Supply Chain Management, Transportation Program Tags: DOE, JMP Abstract AbstractThis research aimed to develop a methodology and a framework to calibrate microscopic simulation models driving behaviors to reproduce safety conflicts observed in real-world environments. The Intelligent Driver Model (IDM) was selected as the car-following algorithm to be utilized in the External Driver Model (EDM) Application Programming Interface (API) in VISSIM to better represent real-world driving behavior. The calibration method starts with an experiment design in the statistical software JMP Pro 16, that provided 84 simulation runs, each with a distinct combination of the 11 EDM input variables. After 84 runs with such variables, the traffic trajectory was analyzed by the FHWA’s Surrogate Safety Assessment Model (SSAM) to generate crossing, rear-end, and lane change conflict counts. It is concluded that the calibration method proposed can closely match the conflict counts translated from real-world conditions. pdfExploiting Functional Data for Combat Simulation Sensitivity Analysis Lisa Joanne Blumson, Charlie Peter, and Andrew Gill (Defence Science and Technology Group) Program Track: Analysis Methodology Program Tags: Data Analytics, DOE, Metamodeling, Output Analysis, R Abstract AbstractComputationally expensive combat simulations are often used to inform military decision-making and sensitivity analyses enable the quantification of the effect of military capabilities or tactics on combat mission effectiveness. The sensitivity analysis is performed using a meta-model approximating the simulation's input-output relationship and the output data that most combat meta-models are fitted to correspond to end-of-run mission effectiveness measures. However during execution, a simulation records a large array of temporal data. 
This paper seeks to examine whether functional combat meta-models fitted to this temporal data, and the subsequent sensitivity analysis, could provide a richer characterization of the effect of military capabilities or tactics. An approach from Functional Data Analysis will be used to illustrate the potential benefits on a case study involving a closed-loop, stochastic land combat simulation. pdfHybrid Simulation-based Algorithm Tuning for Production Speed Management System as a Stand-alone Online Digital Twin Ahmad Attar, Martino Luis, and Tzu-Chun Lin (University of Exeter); Shuya Zhong (University of Bath); and Voicu Ion Sucala and Abdulaziz Alageel (University of Exeter) Program Track: Hybrid Modeling and Simulation Program Tags: DOE, Siemens Tecnomatix Plant Simulation Abstract AbstractOne of the primary in-built components of smart, continuous manufacturing lines is the production speed management system (PSMS). In addition to being overly cautious, the decisions made in these systems may center on making local adjustments to the manufacturing process, indicating a major drawback of such systems that prevents them from acting as proper digital twins. This study delves into hybridizing continuous and discrete-event simulation, DOE, and V-graph methods to redefine PSMS’s internal decision algorithms and procedures, giving it an aerial perspective of the line and turning it into a stand-alone online digital twin with decisions at a system level. The proposed approach is applied to a practical case from the food and beverage industry to validate its effectiveness. Numerical results demonstrated an intelligent, dynamic balancing of the production line, a substantial increase in productivity, and up to 37.7% better resiliency against new failure and repair patterns. pdfPySIRTEM: An Efficient Modular Simulation Platform For The Analysis Of Pandemic Scenarios Preetom Kumar Biswas, Giulia Pedrielli, and K. Selçuk Candan (Arizona State University) Program Track: Modeling Methodology Program Tags: DOE, Monte Carlo, Python Abstract AbstractConventional population-based ODE models struggle at increased levels of resolution, since incorporating many states exponentially increases computational costs and demands robust calibration of numerous hyperparameters. PySIRTEM is a spatiotemporal SEIR-based epidemic simulation platform that provides high-resolution analysis of viral disease progression and mitigation. Based on the authors' previously developed MATLAB simulator SIRTEM, PySIRTEM’s modular design reflects key health processes, including infection, testing, immunity, and hospitalization, enabling flexible manipulation of transition rates. Unlike SIRTEM, PySIRTEM uses a Sequential Monte Carlo (SMC) particle filter to dynamically learn epidemiological parameters using historical COVID-19 data from several U.S. states. The improved accuracy (by orders of magnitude) makes PySIRTEM ideal for informed decision-making by detecting outbreaks and fluctuations. We further demonstrate PySIRTEM’s usability by performing a factorial analysis to assess the impact of different hyperparameter configurations on the predicted epidemic dynamics. Finally, we analyze
containment scenarios with varying trends, showcasing PySIRTEM’s adaptability and effectiveness. pdfThe Derivative-Free Fully-Corrective Frank-Wolfe Algorithm for Optimizing Functionals Over Probability Spaces Best Contributed Theoretical Paper - Finalist Di Yu (Purdue University), Shane G. Henderson (Cornell University), and Raghu Pasupathy (Purdue University) Program Track: Simulation Optimization Program Tag: DOE Abstract AbstractThe challenge of optimizing a smooth convex functional over probability spaces is highly relevant in experimental design, emergency response, variations of the problem of moments, etc. A viable and provably efficient solver is the fully-corrective Frank-Wolfe (FCFW) algorithm. We propose an FCFW recursion that rigorously handles the zero-order setting, where the derivative of the objective is known to exist, but only the objective is observable. Central to our proposal is an estimator for the objective’s influence function, which gives, roughly speaking, the directional derivative of the objective function in the direction of point mass probability distributions, constructed via a combination of Monte Carlo and a projection onto the orthonormal expansion of an L2 function on a compact set. A bias-variance analysis of the influence function estimator guides step size and Monte Carlo sample size choice, and helps characterize the recursive rate behavior on smooth non-convex problems. pdf
A Digital Twin of Water Network for Exploring Sustainable Water Management Strategies Souvik Barat, Abhishek Yadav, and Vinay Kulkarni (Tata Consultancy Services Ltd); Gurudas Nulkar and Soomrit Chattopadhyay (Gokhale Institute of Politics and Economics, Pune); and Ashwini Keskar (Pune Knowledge Cluster, Pune) Program Track: Simulation as Digital Twin Program Tags: Emergent Behavior, Python, System Dynamics Abstract AbstractEfficient water management is an increasingly critical challenge for policymakers tasked with ensuring reliable water availability for agriculture, industry and domestic use while mitigating flood risks during monsoon seasons. This challenge is especially pronounced in regions where water networks rely primarily on rain-fed systems. Managing such a water ecosystem is complex due to inherent constraints on water sources, storage, and flow; environmental uncertainties such as variable rainfall and evaporation; and the increasing needs of urbanization, industrial expansion, and equity in interstate water sharing. In this study, we present a stock-and-flow-based simulatable digital twin designed to accurately represent the dynamics of a rain-dependent water network comprising dams, rivers and associated environmental and usage factors. The model supports scenario-based simulation and the evaluation of mitigation policies to enable evidence-based decision-making. We demonstrate the usefulness of our approach using a real water body network from western India that covers more than 300 km of heterogeneous landscape. pdfAgent-based Social Simulation of Spatiotemporal Process-triggered Graph Dynamical Systems Zakaria Mehrab, S.S. Ravi, Henning Mortveit, Srini Venkatramanan, Samarth Swarup, Bryan Lewis, David Leblang, and Madhav Marathe (University of Virginia) Program Track: Agent-based Simulation Program Tags: Complex Systems, Emergent Behavior, System Dynamics Abstract AbstractGraph dynamical systems (GDSs) are widely used to model and simulate realistic multi-agent social dynamics, including societal unrest. This involves representing the multi-agent system as a network and assigning to each vertex a function describing how it updates its state based on the states of its neighbors. However, in many contexts, social dynamics are triggered by external processes, which can affect the state transitions of agents. The classical GDS formalism does not incorporate such processes. We introduce the STP-GDS framework, which allows a GDS to be triggered by spatiotemporal background processes. We present a rigorous definition of the framework followed by formal analysis to estimate the size of the active neighborhood under two types of process distribution. The real-life applicability of the framework is further highlighted by an additional case study involving evacuation due to natural events, where we analyze collective agent behaviors under heterogeneous environmental and spatial settings. pdfExtending Social Force Model for the Design and Development of Crowd Control and Evacuation Strategies using Hybrid Simulation Best Contributed Applied Paper - Finalist Aaron LeGrand and Seunghan Lee (Mississippi State University) Program Track: Military and National Security Applications Program Tags: Complex Systems, Emergent Behavior, Rare Events Abstract AbstractEfficient crowd control in public spaces is critical for mitigating threats and ensuring public safety, especially in scenarios where live testing environments are limited. 
To improve security systems and public safety, it is important to study crowd behavior following disruptions and to allocate law enforcement resources strategically so as to minimize the impact on civilian populations. This paper proposes an extended social force model to simulate crowd evacuation behaviors in response to security threats, incorporating the influence and coordination of law enforcement personnel. This research examines evacuation strategies that balance public safety and operational efficiency by extending social force models to account for dynamic law enforcement interventions. The proposed model is validated through physics-based simulations, offering insights into effective and scalable solutions for crowd control at public events. The proposed hybrid simulation model explores the utility of integrating agent-based and physics-based approaches to enhance community resilience through improved planning and resource allocation. pdfIdentification of Spatial Energy Demand Shift Flexibilities of EV Charging on Regional Level Through Agent-Based Simulation Paul Benz and Marco Pruckner (University of Würzburg) Program Track: Logistics, Supply Chain Management, Transportation Program Tags: AnyLogic, Emergent Behavior Abstract AbstractOpen access to electric vehicle charging session data is limited to a small selection provided by operators of mostly public or workplace chargers. This restriction poses a hurdle in research on regional energy demand shift flexibilities enabled by smart charging, since the usage characteristics of different charging options remain hidden. In this paper, we present an agent-based simulation model with parameterizable availability and usage preferences of public and private charging infrastructure to gain insights into charging behavior that are otherwise visible only through proprietary data. Thus, we enable utility operators to estimate spatial charging energy distribution and support the integration of renewable energy by showing potentials for smart charging. In a first application, we point out how increased access and use of private charging facilities can lead to additional energy demand in rural municipalities, which, in turn, leads to a lower grid load in urban centers. pdf
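To make the mechanism in the EV-charging abstract above concrete, the following toy sketch lets agents split their charging energy between private and public infrastructure according to assumed availability and preference parameters; the numbers and the decision rule are illustrative stand-ins, not the paper's AnyLogic model or its calibration.

```python
import random

def simulate_region(n_agents, p_home_access, home_pref, daily_kwh=8.0, seed=0):
    """Toy agent-based allocation of charging energy to home vs. public chargers.

    p_home_access: share of agents with a private charger available (assumed).
    home_pref:     probability an agent with access actually charges at home (assumed).
    Returns total energy (kWh) drawn at home and at public chargers.
    """
    rng = random.Random(seed)
    home_kwh = public_kwh = 0.0
    for _ in range(n_agents):
        has_home = rng.random() < p_home_access
        if has_home and rng.random() < home_pref:
            home_kwh += daily_kwh
        else:
            public_kwh += daily_kwh
    return home_kwh, public_kwh

# Illustrative comparison: high private-charger availability vs. an urban centre.
print("rural:", simulate_region(1000, p_home_access=0.8, home_pref=0.9))
print("urban:", simulate_region(1000, p_home_access=0.3, home_pref=0.9))
```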
A Reinforcement Learning-Based Discrete Event Simulation Approach For Streamlining Job-Shop Production Line Under Uncertainty Jia-Min Chen, Bimal Nepal, and Amarnath Banerjee (Texas A&M University) Program Track: Manufacturing and Industry 4.0 Program Tags: Data Driven, FlexSim Abstract AbstractStreamlining the order release strategy for a job-shop production system under uncertainty is a complex problem. The system is likely to have a number of stochastic parameters contributing to the problem complexity. These factors make it challenging to develop optimal job-shop schedules. This paper presents a Reinforcement Learning-based discrete-event simulation approach that streamlines the policy for releasing orders in a job-shop production line under uncertainty. A digital twin (DT) was developed to simulate the job-shop production line, which facilitated the collection of process and equipment data. A reinforcement learning algorithm is connected to the DT environment and trains with the previously collected data. Once the training is complete, its solution is evaluated in the DT using experimental runs. The method is compared with a few popular heuristic-based rules. The experimental results show that the proposed method is effective in streamlining the order release in a job-shop production system with uncertainty. pdfSimulation-based Dynamic Job Shop Scheduling Approach to Minimize the Impact of Resource Uncertainties Md Abubakar Siddique, Selim Molla, Amit Joe Lopes, and Md Fashiar Rahman (The University of Texas at El Paso) Program Track: Manufacturing and Industry 4.0 Program Tags: Complex Systems, FlexSim Abstract AbstractThe complexity of job shops is characterized by variable product routing, machine reliability, and operator learning, which together necessitate intelligent assignment strategies to optimize performance. Traditional models often rely on first-available machine selection, neglecting learning curves and processing time variability. To overcome these limitations, this paper introduces the Data-Driven Job Shop Scheduling (DDJSS) framework, which dynamically selects machines based on the status of resources at the current time step. To evaluate the effectiveness of the proposed framework, we developed two scenarios using FlexSim to perform a thorough analysis. The results demonstrated significant improvements in key performance indicators, including reduced waiting time, lower queue length, and higher throughput. Output increases by over 144% and 348% for some exemplary jobs in the case studies presented in this paper. This study highlights the value of integrating learning behavior and data-driven assignments for improving decision-making in flexible job shop environments. pdf
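For readers unfamiliar with how a reinforcement learning agent can learn an order-release policy against a simulated line, the toy sketch below trains tabular Q-learning on a single stylized queue; it is only an illustration of the general technique, not the paper's FlexSim digital twin or its learning algorithm, and all dynamics and costs are assumed.

```python
import random

random.seed(1)
MAX_Q, ACTIONS = 10, (0, 1)            # queue-length states; action 0 = hold, 1 = release
Q = {(s, a): 0.0 for s in range(MAX_Q + 1) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.1     # learning rate, discount, exploration (assumed)

def step(state, action):
    """Toy dynamics: releasing adds a job; the machine finishes one job 70% of the time."""
    q = min(MAX_Q, state + action)
    reward = 0.0
    if q > 0 and random.random() < 0.7:
        q -= 1
        reward = 1.0                    # a completed job
    return q, reward - 0.05 * q         # small WIP holding cost

state = 0
for _ in range(20000):
    # epsilon-greedy action selection
    action = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda a: Q[(state, a)])
    nxt, reward = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt

# Learned release/hold decision per queue length
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(MAX_Q + 1)})
```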
AURORA: Enhancing Synthetic Population Realism Through RAG and Salience-Aware Opinion Modeling Rebecca Marigliano and Kathleen Carley (Carnegie Mellon University) Program Track: Simulation and Artificial Intelligence Program Tags: Data Driven, Input Modeling, Python Abstract AbstractSimulating realistic populations for strategic influence and social-cyber modeling requires agents that are demographically grounded, emotionally expressive, and contextually coherent. Existing agent-based models often fail to capture the psychological and ideological diversity found in real-world populations. This paper introduces AURORA, a Retrieval-Augmented Generation (RAG)-enhanced framework that leverages large language models (LLMs), semantic vector search, and salience-aware topic modeling to construct synthetic communities and personas. We compare two opinion modeling strategies and evaluate three LLMs—gemini-2.0-flash, deepseek-chat, and gpt-4o-mini—in generating emotionally and ideologically varied agents. Results show that community-guided strategies improve meso-level opinion realism, and LLM selection significantly affects persona traits and emotions. These findings demonstrate that principled LLM integration and salience-aware modeling can enhance the realism and strategic utility of synthetic populations for simulating narrative diffusion, belief change, and social response in complex information environments. pdfAn Agent-Based Framework for Sustainable Perishable Food Supply Chains Maram Shqair (Auburn University); Karam Sweis, Haya Dawkassab, and Safwan Altarazi (German Jordanian University); and Konstantinos Mykoniatis (Auburn University) Program Track: Logistics, Supply Chain Management, Transportation Program Tags: AnyLogic, Input Modeling, Supply Chain Abstract AbstractThis study presents an agent-based modeling framework for enhancing the efficiency and sustainability of perishable food supply chains. The framework integrates forward logistics redesign, reverse logistics, and waste valorization into a spatially explicit simulation environment. It is applied to the tomato supply chain in Jordan, restructuring the centralized market configuration into a decentralized closed loop system with collection points, regional hubs, and biogas units. The model simulates transportation flows, agent interactions, and waste return through retailer backhauls. Simulation results show a 31.1 percent reduction in annual transportation distance and cost, and a 35.9 percent decrease in transportation cost per ton. The proposed approach supports cost-effective logistics and a more equitable distribution of transport burden, particularly by shifting a greater share to retailers. Its modular structure, combined with reliance on synthetic data and scenario flexibility, makes it suitable for evaluating strategies in fragmented, resource-constrained supply chains. pdfAn Empirical Study of Generative Models as Input Models for Simulation Zhou Miao (The Hong Kong Polytechnic University) and Zhiyuan Huang and Zhaolin Hu (Tongji University) Program Track: Simulation and Artificial Intelligence Program Tags: Input Modeling, Python Abstract AbstractInput modeling is pivotal for generating realistic data that mirrors real-world variables for simulation, yet traditional parametric methods often fail to capture complex dependencies.
This study investigates the efficacy of modern generative models—such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), Normalizing Flows, and Diffusion Models—in addressing these challenges. Through systematic experiments on synthetic and real-world datasets, we evaluate their performance using metrics like Wasserstein distance and quantile loss. Our findings reveal that VAEs and Denoising Diffusion Probabilistic Models (DDPMs) consistently outperform other models, particularly in capturing nonlinear relationships, while GAN-based approaches exhibit instability. These results provide practical insights for practitioners, highlighting models that deliver reliable performance without extensive customization, and outline promising directions for future research in simulation input modeling. pdfDynamic Calibration of Digital Twin via Stochastic Simulation: A Wind Energy Case Study Best Contributed Theoretical Paper - Finalist, Best Contributed Applied Paper - Finalist Yongseok Jeon and Sara Shashaani (North Carolina State University), Eunshin Byon (University of Michigan), and Pranav Jain (North Carolina State University) Program Track: Simulation as Digital Twin Program Tags: Input Modeling, Metamodeling Abstract AbstractThis study presents an approach to dynamically calibrate a digital twin to support decision-making in systems operating under uncertainty. The framework integrates uncertainty by turning a physics-based model into a stochastic simulation, where independent variables that represent environmental conditions may be nonstationary whereas target variables are conditionally stationary. Calibration itself is formulated as a simulation optimization problem that we solve using a root-finding strategy. As a case study, we apply the framework to the prediction of short-term power deficit, known as the wake effect, in wind farms using real-world data and demonstrate the robustness of the proposed framework. Besides advancing the digital twin research, the presented methodology is expected to impact wind farm wake steering strategy by enabling accurate short-term wake effect prediction. pdfQuantifying Uncertainty from Machine Learning Surrogate Models Embedded in Simulation Models Mohammadmahdi Ghasemloo and David J. Eckman (Texas A&M University) and Yaxian Li (Intuit) Program Tag: Input Modeling Abstract AbstractModern simulation models increasingly feature complex logic intended to represent how tactical decisions are made with advanced decision support systems (DSSs). For a variety of reasons, e.g., concerns about computational cost, data privacy, and latency, users might choose to replace DSSs with approximate logic within the simulation model. This paper investigates the impacts of replacing DSSs with machine learning surrogate models on the estimation of system performance metrics. We distinguish this so-called surrogate uncertainty from conventional input uncertainty and develop approaches for quantifying the error introduced by the use of surrogate models. Specifically, we explore bootstrapping and Bayesian model averaging methods for obtaining quantile-based confidence intervals for expected performance measures and propose using regression-tree importance scores to apportion the overall uncertainty across input and surrogate models. We illustrate our approach through a contact-center simulation experiment. 
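The surrogate-uncertainty study above relies in part on bootstrapping to obtain quantile-based confidence intervals. A minimal percentile-bootstrap sketch of that idea follows, with stand-in data; the authors' full procedure also covers Bayesian model averaging and the apportionment of uncertainty across input and surrogate models, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(outputs, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for an expected performance measure."""
    outputs = np.asarray(outputs)
    means = np.array([
        rng.choice(outputs, size=outputs.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

# Stand-in for replicated simulation outputs (e.g., mean waiting times) produced
# with an ML surrogate embedded in the simulation model.
sim_outputs = rng.normal(loc=4.2, scale=0.8, size=50)
print(bootstrap_ci(sim_outputs))
```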
pdfReal-time Image Processing and Emulation for Intelligent Warehouse Automation Sumant Joshi, Saurabh Lulekar, Tarciana Almeida, Abhineet Mittal, and Ganesh Nanaware (Amazon) Program Track: Simulation as Digital Twin Program Tags: Input Modeling, Matlab Abstract AbstractMaterial flow simulation and emulation are essential tools used in warehouse automation design and commissioning to create a digital twin and validate equipment control logic. The current emulation platforms lack an internal computer vision (CV) toolkit, which poses a challenge for emulating vision-based control system behavior that requires real-time image processing capability. This paper addresses this gap by proposing an innovative framework that utilizes a bridge between Emulate3D and MATLAB to establish real-time bidirectional communication to emulate vision-based control systems. The integration enables transfer of visual data from Emulate3D to MATLAB, which provides a CV toolkit to analyze the vision data and communicate control decisions back to Emulate3D. We evaluated this approach by developing a small-footprint package singulator (SFPS); the results show that the SFPS achieved target throughput with a 45% improvement in singulation accuracy over conventional singulators, a 64% smaller footprint, and no need for the gapper equipment that conventional singulators require. pdf
Assessing the NATO Clinical Timelines in Medical Evacuation: A Simulation with Open-Access Data Kai Meisner (Bundeswehr Medical Academy, University of the Bundeswehr Munich); Falk Stefan Pappert and Tobias Uhlig (University of the Bundeswehr Munich); Mehdi Benhassine (Royal Military Academy); and Oliver Rose (University of the Bundeswehr Munich) Program Track: Military and National Security Applications Program Tag: Java Abstract AbstractNATO allies are preparing for Large-Scale Combat Operations (LSCOs) against peer or near-peer adversaries. Although a significant increase in casualties with life-threatening injuries is expected, western military personnel lack experience with the medical requirements of LSCOs. We propose the use of simulation to conduct necessary research, estimate the resources required, and adapt the doctrine. We therefore present a scenario for assessing NATO’s clinical timelines based on open-access data, showing that a shortage of surgical capacity is likely to occur. pdfDevelopment of a Library of Modular Components to Accelerate Material Flow Simulation in the Aviation Industry Hauke Stolz, Philipp Braun, and Hendrik Rose (Technical University of Hamburg) and Helge Fromm and Sascha Stebner (Airbus Group) Program Track: Logistics, Supply Chain Management, Transportation Program Tags: AnyLogic, Java Abstract AbstractAircraft manufacturing presents significant challenges for logistics departments due to the complexity of processes and technology, as well as the high variety of parts that must be handled. To support the development and optimization of these complex logistics processes in the aviation industry, simulation is commonly employed. However, existing simulation models are typically tailored to specific use cases. Reusing or adapting these models for other aircraft-specific applications often requires substantial implementation and
validation efforts. As a result, there is a need for flexible and easily adaptable simulation models. This work aims to address this challenge by developing a modular library for logistics processes in aircraft manufacturing. The outcome of this work highlights the simplifications introduced by the developed library and its application in a real aviation warehouse. pdfGoal-oriented Generation of Simulation Experiments Anja Wolpers, Pia Wilsdorf, Fiete Haack, and Adelinde M. Uhrmacher (University of Rostock) Program Track: Modeling Methodology Program Tags: Conceptual Modeling, Java, Validation Abstract AbstractAutomatically generating and executing simulation experiments promises to make running simulation
studies more efficient, less error-prone, and easier to document and replicate. However, during experiment
generation, background knowledge is required regarding which experiments using which inputs and outputs
are useful to the modeler. Therefore, we conducted an interview study to identify what types of experiments
modelers perform during simulation studies. From the interview results, we defined four general goals
for simulation experiments: exploration, confirmation, answering the research question, and presentation.
Based on the goals, we outline and demonstrate an approach for automatically generating experiments by
utilizing an explicit and thoroughly detailed conceptual model. pdfLeveraging International Collaboration for Interactive Lunar Simulations: An Educational Experience From SEE 2025 Kaique Govani, Andrea Lucia Braga, José Lucas Fogaça Aguiar, Giulia Oliveira, Andressa Braga, Rafael Henrique Ramos, Fabricio Torquato Leite, and Patrick Augusto Pinheiro Silva (FACENS) Program Track: Simulation in Space Program Tags: Conceptual Modeling, Distributed, Java Abstract AbstractThis paper presents an educational experience from the Simulation Exploration Experience (SEE) 2025, focusing on leveraging international collaboration to develop interactive lunar simulations. Specifically, the FACENS team created two interoperable simulation federates, a Lunar Cable Car system and an Astronaut system, using Java, Blender and the SEE Starter Kit Framework (SKF). Putting emphasis on the educational and collaborative aspects of SEE, our primary objectives included developing robust real-time interactions with international teams, enhancing simulation visuals, and improving astronaut behavior and logic using optimized path-finding algorithms. Seamless interoperability was demonstrated with federates developed by Brunel University and Florida Polytechnic University. Our experiences and lessons learned provide valuable insights for future teams engaged in distributed simulation development and international collaborative projects in the space exploration domain. pdfSimulation of a Semiconductor Manufacturing Research and Development Cleanroom Baptiste Loriferne (CEA LETI, Mines Saint-Etienne); Gaëlle Berthoux (CEA LETI); Valeria Borodin (IMT Atlantique); Vincent Fischer (CEA LETI); and Agnès Roussy (Mines Saint-Etienne) Program Track: MASM: Semiconductor Manufacturing Program Tag: Java Abstract AbstractThis paper focuses on a Research and Development (R&D) semiconductor manufacturing system. By virtue of their vocation, R&D facilities tolerate much more variability in processes and outcomes than industrial-scale ones. In such environments, operating under conditions characterized by high uncertainty and occurrences of (un)knowns corresponds to normal operating conditions rather than abnormal ones. This paper characterizes the key entities and operational aspects of a semiconductor R&D cleanroom and introduces a discrete-event simulation model that captures these elements. The simulation model is grounded in empirical data and reflects real-life operations management practices observed in actual R&D cleanroom settings. Preliminary computational results based on real-life instances are presented, and future research directions are outlined to support resilient decision-making in environments where high levels of uncertainty are part of normal operating conditions. pdf
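As a rough illustration of the kind of discrete-event model described in the cleanroom abstract above, the sketch below queues highly variable R&D lots at a shared tool group using SimPy. The paper's own model is Java-based and far more detailed, so everything here (arrival rate, tool capacity, processing-time distribution) is an assumed placeholder.

```python
import random
import simpy

random.seed(42)

def lot(env, tool, cycle_times):
    """One R&D lot: wait for a chamber, then undergo a highly variable process step."""
    arrival = env.now
    with tool.request() as req:
        yield req
        yield env.timeout(random.lognormvariate(1.0, 0.8))   # heavy-tailed process time (assumed)
    cycle_times.append(env.now - arrival)

def source(env, tool, cycle_times, mean_interarrival=2.0):
    """Generate lots with exponential interarrival times."""
    while True:
        yield env.timeout(random.expovariate(1.0 / mean_interarrival))
        env.process(lot(env, tool, cycle_times))

env = simpy.Environment()
tool = simpy.Resource(env, capacity=2)    # a two-chamber tool group (assumed)
cycle_times = []
env.process(source(env, tool, cycle_times))
env.run(until=500)
print(f"{len(cycle_times)} lots finished, mean cycle time {sum(cycle_times)/len(cycle_times):.2f}")
```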
Calibrating Driver Aggression Parameters in Microscopic Simulation using Safety-Surrogate Measures David Hong and Montasir Abbas (Virginia Tech) Program Track: Logistics, Supply Chain Management, Transportation Program Tags: DOE, JMP Abstract AbstractThis research aimed to develop a methodology and a framework to calibrate the driving behaviors of microscopic simulation models to reproduce safety conflicts observed in real-world environments. The Intelligent Driver Model (IDM) was selected as the car-following algorithm to be utilized in the External Driver Model (EDM) Application Programming Interface (API) in VISSIM to better represent real-world driving behavior. The calibration method starts with an experiment design in the statistical software JMP Pro 16, which provided 84 simulation runs, each with a distinct combination of the 11 EDM input variables. After the 84 runs, the traffic trajectories were analyzed by the FHWA’s Surrogate Safety Assessment Model (SSAM) to generate crossing, rear-end, and lane change conflict counts. It is concluded that the proposed calibration method can closely match the conflict counts translated from real-world conditions. pdf
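For reference, the Intelligent Driver Model named in this abstract has a standard closed form; a minimal implementation with textbook-style default parameters is sketched below. The paper's External Driver Model calibrates additional driver-aggression parameters inside VISSIM, which this sketch does not attempt to reproduce.

```python
from math import sqrt

def idm_acceleration(v, gap, dv, v0=33.3, T=1.5, a=1.0, b=2.0, delta=4, s0=2.0):
    """Standard Intelligent Driver Model acceleration (parameter values are illustrative).

    v    : own speed (m/s)
    gap  : bumper-to-bumper gap to the leader (m)
    dv   : approach rate v - v_leader (m/s)
    """
    # Desired dynamic gap s* combines a standstill distance, a time headway, and a braking term.
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * sqrt(a * b)))
    return a * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

# A fast follower closing in on a slower leader decelerates:
print(idm_acceleration(v=30.0, gap=60.0, dv=5.0))
```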
Real-time Image Processing and Emulation for Intelligent Warehouse Automation Sumant Joshi, Saurabh Lulekar, Tarciana Almeida, Abhineet Mittal, and Ganesh Nanaware (Amazon) Program Track: Simulation as Digital Twin Program Tags: Input Modeling, Matlab Abstract AbstractMaterial flow simulation and emulation are essential tools used in warehouse automation design and commissioning to create a digital twin and validate equipment control logic. The current emulation platforms lack an internal computer vision (CV) toolkit, which poses a challenge for emulating vision-based control system behavior that requires real-time image processing capability. This paper addresses this gap by proposing an innovative framework that utilizes a bridge between Emulate3D and MATLAB to establish real-time bidirectional communication to emulate vision-based control systems. The integration enables transfer of visual data from Emulate3D to MATLAB, which provides a CV toolkit to analyze the vision data and communicate control decisions back to Emulate3D. We evaluated this approach by developing a small-footprint package singulator (SFPS); the results show that the SFPS achieved target throughput with a 45% improvement in singulation accuracy over conventional singulators, a 64% smaller footprint, and no need for the gapper equipment that conventional singulators require. pdf
ASTROMoRF: Adaptive Sampling Trust-Region Optimization with Dimensionality Reduction Benjamin Wilson Rees, Christine S.M. Currie, and Vuong Phan (University of Southampton) Program Track: Simulation Optimization Program Tags: Metamodeling, Python Abstract AbstractHigh dimensional simulation optimization problems have become prevalent in recent years. In practice, the objective function is typically influenced by a lower dimensional combination of the original decision variables, and implementing dimensionality reduction can improve the efficiency of the optimization algorithm. In this paper, we introduce a novel algorithm ASTROMoRF that combines adaptive sampling with dimensionality reduction, using an iterative trust-region approach. Within a trust-region algorithm, a series of surrogates or metamodels is built to estimate the objective function. Using a lower dimensional subspace reduces the number of design points needed for building a surrogate within each trust-region and consequently the number of simulation replications. We explain the basis for the algorithm within the paper and compare its finite-time performance with other state-of-the-art solvers. pdfData-driven Digital Twin for the Predictive Maintenance of Business Processes Paolo Bocciarelli and Andrea D'Ambrogio (University of Rome Tor Vergata) Program Track: Simulation as Digital Twin Program Tags: Data Driven, Metamodeling, Process Mining Abstract AbstractThis paper presents a data-driven framework for the predictive maintenance of Business Processes based on the Digital Twin paradigm. The proposed approach integrates process mining techniques and a low-code development approach to build reliability-aware simulation models from system logs. These models are used to automatically generate executable DTs capable of predicting resource failures and estimating the Remaining Useful Life (RUL) of system components. The predictions are then exploited to trigger preventive actions or automated reconfigurations. The framework is implemented using the PyBPMN/eBPMN framework and evaluated on a manufacturing case study. Results show that the DT enables timely interventions, minimizes system downtimes, and ensures process continuity. pdfDialectic Models for Documenting and Conducting Simulation Studies: Exploring Feasibility Steffen Zschaler (King's College London), Pia Wilsdorf (University of Rostock), Thomas Godfrey (Aerogility Ltd), and Adelinde M. Uhrmacher (University of Rostock) Program Track: Modeling Methodology Program Tags: Metamodeling, Validation Abstract AbstractValidation and documentation of rationale are central to simulation studies. Most current approaches focus only on individual simulation artifacts, most typically simulation models, and their validity rather than their contribution to the overall simulation study. Approaches that aim to validate simulation studies as a whole either impose structured processes with the implicit assumption that this will ensure validity, or they rely on capturing provenance and rationale, most commonly in natural language, following accepted documentation guidelines. Inspired by dialectic approaches for developing mathematical proofs, we explore the feasibility of capturing validity and rationale information as a study unfolds through agent dialogs that also generate the overall simulation-study argument. We introduce a formal framework, an initial catalog of possible interactions, and a proof-of-concept tool to capture such information about a simulation study.
We illustrate the ideas in the context of a cell biological simulation study. pdfDynamic Calibration of Digital Twin via Stochastic Simulation: A Wind Energy Case Study Best Contributed Theoretical Paper - Finalist, Best Contributed Applied Paper - Finalist Yongseok Jeon and Sara Shashaani (North Carolina State University), Eunshin Byon (University of Michigan), and Pranav Jain (North Carolina State University) Program Track: Simulation as Digital Twin Program Tags: Input Modeling, Metamodeling Abstract AbstractThis study presents an approach to dynamically calibrate a digital twin to support decision-making in systems operating under uncertainty. The framework integrates uncertainty by turning a physics-based model into a stochastic simulation, where independent variables that represent environmental conditions may be nonstationary whereas target variables are conditionally stationary. Calibration itself is formulated as a simulation optimization problem that we solve using a root-finding strategy. As a case study, we apply the framework to the prediction of short-term power deficit, known as the wake effect, in wind farms using real-world data and demonstrate the robustness of the proposed framework. Besides advancing the digital twin research, the presented methodology is expected to impact wind farm wake steering strategy by enabling accurate short-term wake effect prediction. pdfExploiting Functional Data for Combat Simulation Sensitivity Analysis Lisa Joanne Blumson, Charlie Peter, and Andrew Gill (Defence Science and Technology Group) Program Track: Analysis Methodology Program Tags: Data Analytics, DOE, Metamodeling, Output Analysis, R Abstract AbstractComputationally expensive combat simulations are often used to inform military decision-making, and sensitivity analyses enable the quantification of the effect of military capabilities or tactics on combat mission effectiveness. The sensitivity analysis is performed using a meta-model approximating the simulation's input-output relationship, and the output data that most combat meta-models are fitted to correspond to end-of-run mission effectiveness measures. However, during execution, a simulation records a large array of temporal data. This paper seeks to examine whether functional combat meta-models fitted to this temporal data, and the subsequent sensitivity analysis, could provide a richer characterization of the effect of military capabilities or tactics. An approach from Functional Data Analysis will be used to illustrate the potential benefits on a case study involving a closed-loop, stochastic land combat simulation. pdfGeneral-Purpose Ranking and Selection for Stochastic Simulation with Streaming Input Data Jaime Gonzalez-Hodar and Eunhye Song (Georgia Institute of Technology) Program Track: Simulation Optimization Program Tags: Data Driven, Metamodeling, Ranking and Selection Abstract AbstractWe study ranking and selection (R&S) where the simulator’s input models are estimated with increasing precision from the streaming data obtained from the system. The goal is to decide when to stop updating the model and return the estimated optimum with a probability of good selection (PGS) guarantee. We extend the general-purpose R&S procedure by Lee and Nelson by integrating a metamodel that represents the input uncertainty effect on the simulation output performance measure. The algorithm stops when the estimated PGS is no less than 1−α, accounting for both prediction error in the metamodel and input uncertainty.
We then propose an alternative procedure that terminates significantly earlier while still providing the same (approximate) PGS guarantee by allowing the performance measures of inferior solutions to be estimated with lower precision than those of good solutions. Both algorithms can accommodate nonparametric input models and/or performance measures other than the means (e.g., quantiles). pdf
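To give a flavor of ranking-and-selection logic without the paper's machinery, the sketch below performs a simple confidence-interval screening over a few simulated systems. It assumes roughly normal output averages and fixed sample sizes, and it is not the Lee-and-Nelson-based procedure or the streaming-input extension described above.

```python
import numpy as np

rng = np.random.default_rng(7)

def screen(samples, z=2.326):
    """Keep every system whose mean could still plausibly be the smallest.

    samples: dict name -> array of simulation outputs (smaller is better).
    A system survives if its lower confidence bound does not exceed the
    smallest upper confidence bound among all systems.
    """
    stats = {k: (v.mean(), z * v.std(ddof=1) / np.sqrt(v.size)) for k, v in samples.items()}
    best_upper = min(m + h for m, h in stats.values())
    return [k for k, (m, h) in stats.items() if m - h <= best_upper]

# Three candidate policies with noisy waiting-time outputs (illustrative data only).
samples = {name: rng.normal(mu, 1.0, size=30)
           for name, mu in [("policy-A", 5.0), ("policy-B", 5.2), ("policy-C", 7.0)]}
print(screen(samples))      # policy-C is typically eliminated
```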
A Formal and Deployable Gaming Operation to Defend IT/OT Networks Ranjan Pal, Lillian Bluestein, Tilek Askerbekov, and Michael Siegel (Massachusetts Institute of Technology) Program Track: Military and National Security Applications Program Tags: Complex Systems, Conceptual Modeling, Cybersecurity, Monte Carlo Abstract AbstractThe cyber vulnerability terrain is largely amplified in critical infrastructure systems (CISs) that attract exploitative (nation-state) adversaries. This terrain is layered over an IT and IoT-driven operational technology (OT) network that supports CIS software applications and underlying protocol communications. Usually, the network is too large for both cyber adversaries and defenders to control every network resource under budget constraints. Hence, both sides strategically want to target 'crown jewels' (i.e., critical network resources) as points of control in the IT/OT network. Going against the traditional CIS game theory literature, which idealistically (and impractically) models attacker-defender interactions, we are the first to formally model real-world adversary-defender strategic interactions in CIS networks as a simultaneous non-cooperative network game with an auction contest success function (CSF) to derive the optimal defender strategy at Nash equilibria. We compare theoretical game insights with those from large-scale Monte Carlo game simulations and propose CIS-managerial cyber defense action items. pdfA New Stochastic Approximation Method for Gradient-based Simulated Parameter Estimation Zehao Li and Yijie Peng (Peking University) Program Track: Simulation Optimization Program Tags: Monte Carlo, Sampling Abstract AbstractThis paper tackles the challenge of parameter calibration in stochastic models, particularly in scenarios where the likelihood function is unavailable in an analytical form. We introduce a gradient-based simulated parameter estimation (GSPE) framework, which employs a multi-time scale stochastic approximation algorithm. This approach effectively addresses the ratio bias that arises in both maximum likelihood estimation and posterior density estimation problems. The proposed algorithm enhances estimation accuracy and significantly reduces computational costs, as demonstrated through extensive numerical experiments. Our work extends the GSPE framework to handle complex models such as hidden Markov models and variational inference-based problems, offering a robust solution for parameter estimation in challenging stochastic environments. pdfAI on Small and Noisy Data is Ineffective For ICS Cyber Risk Management Best Contributed Theoretical Paper - Finalist Yaphet Lemiesa, Ranjan Pal, and Michael Siegel (Massachusetts Institute of Technology) Program Track: Simulation and Artificial Intelligence Program Tags: Cybersecurity, Data Driven, Monte Carlo Abstract AbstractModern industrial control systems (ICSs) are increasingly relying upon IoT and CPS technology to improve cost-effective service performance at scale. Consequently, the cyber vulnerability terrain is largely amplified in ICSs. Unfortunately, the historical lack of (a) sufficient, non-noisy ICS cyber incident data, and (b) intelligent operational business processes to collect and analyze available ICS cyber incident data, demands the attention of the Bayesian AI community to develop cyber risk management (CRM) tools to address these challenges.
In this paper we show with sufficient Monte Carlo simulation evidence that Bayesian AI on noisy (and small) ICS cyber incident data is ineffective for CRM. More specifically, we show via a novel graphical sensitivity analysis methodology that even small amounts of statistical noise in cyber incident data are sufficient to reduce ICS intrusion/anomaly detection performance by a significant percentage. Hence, ICS management processes should strive to collect sufficient non-noisy cyber incident data. pdfCentral Limit Theorem for a Randomized Quasi-Monte Carlo Estimator of a Smooth Function of Means Marvin K. Nakayama (New Jersey Institute of Technology), Bruno Tuffin (Inria), and Pierre L'Ecuyer (Université de Montréal) Program Track: Analysis Methodology Program Tags: Monte Carlo, Output Analysis, Variance Reduction Abstract AbstractConsider estimating a known smooth function (such as a ratio) of unknown means. Our paper accomplishes this by first estimating each mean via randomized quasi-Monte Carlo and then evaluating the function at the estimated means. We prove that the resulting plug-in estimator obeys a central limit theorem by first establishing a joint central limit theorem for a triangular array of estimators of the vector of means and then employing the delta method. pdfComputing Estimators of a Quantile and Conditional Value-at-Risk Sha Cao, Truong Dang, James M. Calvin, and Marvin K. Nakayama (New Jersey Institute of Technology) Program Track: Analysis Methodology Program Tags: Monte Carlo, Rare Events, Sampling Abstract AbstractWe examine various sorting and selection methods for computing quantile and the conditional value-at-risk, two of the most commonly used risk measures in risk management scenarios. We study the situation where simulation data is already pre-generated, and perform timing experiments on calculating risk measures on the existing datasets. Through numerical experiments, approximate analyses, and existing theoretical results, we find that selection generally outperforms sorting, but which selection strategy runs fastest depends on several factors. pdfFast Monte Carlo Irene Aldridge (Cornell University) Program Track: Modeling Methodology Program Tags: Monte Carlo, Variance Reduction Abstract AbstractThis paper proposes an eigenvalue-based small-sample approximation of the celebrated Markov Chain Monte Carlo that delivers an invariant steady-state distribution that is consistent with traditional Monte Carlo methods. The proposed eigenvalue-based methodology reduces the number of paths required for Monte Carlo from as many as 1,000,000 to as few as 10 (depending on the simulation time horizon T), and delivers comparable, distributionally robust results, as measured by the Wasserstein distance. The proposed methodology also produces a significant variance reduction in the steady-state distribution. pdfGNN-Heatmap Augmented Monte Carlo Tree Search for Cloud Workflow Scheduling Best Contributed Applied Paper - Finalist Dingyu Zhou, Jiaqi Huang, Yirui Zhang, and Wai Kin (Victor) Chan (Tsinghua University) Program Track: Simulation and Artificial Intelligence Program Tags: Monte Carlo, Neural Networks, Python Abstract AbstractThis paper addresses the NP-hard cloud workflow scheduling problem by proposing a novel method that integrates Graph Neural Networks with Monte Carlo Tree Search (MCTS). Cloud workflows, represented as Directed Acyclic Graphs, present significant scheduling challenges due to complex task dependencies and heterogeneous resource requirements. 
Our method leverages Anisotropic Graph Neural Networks to extract structural features from workflows and create a heatmap that guides the MCTS process during both the selection and simulation phases. Extensive experiments on workflows ranging from 30 to 110 tasks demonstrate that our method outperforms rule-based algorithms, classic MCTS, and other learning-based approaches; more notably, it achieves near-optimal solutions with only a 2.56% gap from exact solutions and demonstrates exceptional scalability to completely unseen workflow sizes. This synergistic integration of neural network patterns with Monte Carlo simulation-based search not only advances cloud workflow scheduling but also offers valuable insights for simulation-based optimization across diverse domains. pdfImportance Sampling for Latent Dirichlet Allocation Best Contributed Theoretical Paper - Finalist Paul Glasserman and Ayeong Lee (Columbia University) Program Track: Analysis Methodology Program Tags: Monte Carlo, Variance Reduction Abstract AbstractLatent Dirichlet Allocation (LDA) is a method for finding topics in text data. Evaluating an LDA model entails estimating the expected likelihood of held-out documents. This is commonly done through Monte Carlo simulation, which is prone to high relative variance. We propose an importance sampling estimator for this problem and characterize the theoretical asymptotic statistical efficiency it achieves in large documents. We illustrate the method in simulated data and in a dataset of news articles. pdfMulti-Fidelity Stochastic Trust Region Method with Adaptive Sampling Yunsoo Ha and Juliane Mueller (National Renewable Energy Laboratory) Program Track: Simulation Optimization Program Tag: Monte Carlo Abstract AbstractSimulation optimization is often hindered by the high cost of running simulations. Multi-fidelity methods offer a promising solution by incorporating cheaper, lower-fidelity simulations to reduce computational time. However, the bias in low-fidelity models can mislead the search, potentially steering solutions away from the high-fidelity optimum. To overcome this, we propose ASTRO-MFDF, an adaptive sampling trust-region method for multi-fidelity simulation optimization. ASTRO-MFDF features two key strategies: (i) it adaptively determines the sample size and selects appropriate sampling strategies to reduce computational cost; and (ii) it selectively uses low-fidelity information only when a high correlation with the high-fidelity model is anticipated, reducing the risk of bias. We validate the performance and computational efficiency of ASTRO-MFDF through numerical experiments using the SimOpt library. pdfOptimization of Queueing Systems Using Streaming Simulation Robert James Lambert, James Grant, and Rob Shone (Lancaster University) and Roberto Szechtman (Naval Postgraduate School) Program Track: Simulation Optimization Program Tags: Data Driven, Monte Carlo Abstract AbstractWe consider the problem of adaptively determining the optimal number of servers in an M/G/c queueing system in which the unknown arrival rate must be estimated using data that arrive sequentially over a series of observation periods. We propose a stochastic simulation-based approach that uses iteratively updated parameters within a greedy decision-making policy, with the selected number of servers minimising a Monte Carlo estimate of a chosen objective function.
Under minimal assumptions, we derive a central limit theorem for the Monte Carlo estimator and establish an asymptotic bound on the probability of incorrect selection of the policy. We also demonstrate the empirical performance of the policy in a finite-time numerical experiment. pdfPySIRTEM: An Efficient Modular Simulation Platform For The Analysis Of Pandemic Scenarios Preetom Kumar Biswas, Giulia Pedrielli, and K. Selçuk Candan (Arizona State University) Program Track: Modeling Methodology Program Tags: DOE, Monte Carlo, Python Abstract AbstractConventional population-based ODE models struggle with increased levels of resolution, since incorporating many states exponentially increases computational costs and demands robust calibration of numerous hyperparameters. PySIRTEM is a spatiotemporal SEIR-based epidemic simulation platform that provides high-resolution analysis of viral disease progression and mitigation. Based on the authors' previously developed Matlab simulator SIRTEM, PySIRTEM’s modular design reflects key health processes, including infection, testing, immunity, and hospitalization, enabling flexible manipulation of transition rates. Unlike SIRTEM, PySIRTEM uses a Sequential Monte Carlo (SMC) particle filter to dynamically learn epidemiological parameters using historical COVID-19 data from several U.S. states. The improved accuracy (by orders of magnitude) makes PySIRTEM ideal for informed decision-making by detecting outbreaks and fluctuations. We further demonstrate PySIRTEM's usability by performing a factorial analysis to assess the impact of different hyperparameter configurations on the predicted epidemic dynamics. Finally, we analyze
containment scenarios with varying trends, showcasing PySIRTEM's adaptability and effectiveness. pdfStatistical Properties of Mean-Variance Portfolio Optimization Zhaolin Hu (Tongji University) Program Track: Simulation Optimization Program Tag: Monte Carlo Abstract AbstractWe study Markowitz’s mean-variance portfolio optimization problem. When practically using this model, the mean vector and the covariance matrix of the asset returns often need to be estimated from the sample data. The sample errors will be propagated to the optimization output. In this paper, we consider three commonly used mean-variance models and establish the asymptotic properties of the conventional sample approximations that are widely adopted and studied, by leveraging stochastic optimization theory. We show that for all three models, under certain conditions the sample approximations have the desired consistency and achieve a convergence rate of the square root of the sample size, and the asymptotic variance depends on the first four moments of the returns. We conduct numerical experiments to test the asymptotic properties of the estimation. We also conduct experiments to illustrate that the asymptotic normality might not hold when the fourth moments of the returns do not exist. pdfUsing Adaptive Basis Search Method In Quasi-Regression To Interpret Black-Box Models Ambrose Emmett-Iwaniw and Christiane Lemieux (University of Waterloo) Program Track: Analysis Methodology Program Tags: Monte Carlo, R, Variance Reduction Abstract AbstractQuasi-Regression (QR) is an inference method that approximates a function of interest (e.g., black-box model) for interpretation purposes by a linear combination of orthonormal basis functions of L2[0,1]^d. The coefficients are integrals that do not have an analytical solution and therefore must be estimated using Monte Carlo or Randomized Quasi-Monte Carlo (RQMC). The QR method can be time-consuming if the number of basis functions is large. If the function of interest is sparse, many of these basis functions are irrelevant and could thus be removed, but they need to be correctly identified first. We address this challenge by proposing new adaptive basis search methods based on the RQMC method that adaptively select important basis functions. These methods are shown to be much faster than previously proposed QR methods and are overall more efficient. pdf
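The quasi-regression idea in the last abstract above reduces to estimating coefficients of an orthonormal basis by sampling. The sketch below does this with plain Monte Carlo and a small tensor-product Legendre basis on [0,1]^2; the paper instead uses RQMC and adaptive basis search, so this is only a baseline illustration with an invented test function.

```python
import numpy as np

rng = np.random.default_rng(3)

# Shifted Legendre polynomials, orthonormal on [0, 1].
phi = [
    lambda x: np.ones_like(x),
    lambda x: np.sqrt(3.0) * (2.0 * x - 1.0),
    lambda x: np.sqrt(5.0) * (6.0 * x**2 - 6.0 * x + 1.0),
]

def quasi_regression(f, d, degree, n=20000):
    """Estimate coefficients c_j = E[f(U) * phi_j(U)] for a tensor-product basis."""
    U = rng.random((n, d))
    y = f(U)
    coeffs = {}
    for j in np.ndindex(*(degree + 1,) * d):            # multi-indices of basis functions
        basis = np.prod([phi[j[k]](U[:, k]) for k in range(d)], axis=0)
        coeffs[j] = float(np.mean(y * basis))           # plain Monte Carlo estimate
    return coeffs

# Toy black box: f depends only on the first input, so coefficients with j2 > 0 are near zero,
# which is the kind of sparsity an adaptive basis search would exploit.
f = lambda U: U[:, 0] ** 2
for j, c in quasi_regression(f, d=2, degree=2).items():
    print(j, round(c, 3))
```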
Integrated RTS-RTD Simulation Framework for Semiconductor Manufacturing System Seongho Cho (Ajou University), Donguk Kim (LG Production and Research Institute), and Sangchul Park (Ajou University) Program Track: MASM: Semiconductor Manufacturing Program Tags: DEVS, MOZART LSE Abstract AbstractThe complexity of modern semiconductor fabrication (FAB) systems makes it difficult to implement integrated simulation systems that combine production and logistics simulators. As a result, these simulators have traditionally been developed independently. However, in actual FAB operations, information exchange between Real-Time Schedulers (RTS) and Real-Time Dispatchers (RTD) coordinates production activities. To address this issue, we propose a coupled RTS–RTD simulation framework that integrates production and logistics simulators into a unified environment. In addition, we introduce a dynamic decision-making rule that enables flexible responses when logistical constraints prevent execution of the original production schedule. Simulation experiments were conducted using the SMT2020 and SMAT2022 datasets. The results show that selectively following RTD decisions, instead of strictly adhering to RTS-generated schedules, can significantly improve production efficiency in FAB operations. pdf
Modeling Pedestrian Movement in a Crowd Context with Urgency Preemption Susan K. Aros (Naval Postgraduate School) and Dale Frakes (Portland State University) Program Track: Military and National Security Applications Program Tag: Netlogo Abstract AbstractRealistic crowd modeling is essential for military and security simulation models. In this paper, we address the modeling of the movement of people in the types of unstructured crowds that are common in civil security situations. Early approaches in the literature to simulating the movement of individuals in a crowd typically treated the crowd as consisting of entities moving on a fixed grid, or as particles in a fluid flow, where the movement rules were relatively simple and each member had the same goal, such as moving along a crowded sidewalk or evacuating through an exit. This paper proposes a 2-part approach for more complex pedestrian movement modeling that takes into account the cognitively-determined behavioral intent of each member of the crowd to determine their own movement objective while also allowing each to temporarily react to a short-term urgent situation that may arise while pursuing their movement goal. pdfSelf-Organization in Crowdsourced Food Delivery Systems Berry Gerrits and Martijn Mes (University of Twente) Program Track: Agent-based Simulation Program Tags: Complex Systems, Netlogo Abstract AbstractThis paper presents an open-source agent-based simulation model to study crowd-sourced last-mile food delivery. Within this context, we focus on a system that allows couriers with varying degrees of autonomy and cooperativeness to make decisions about accepting orders and strategically relocating. We model couriers as agents in an agent-based simulation model implemented in NetLogo. Our approach provides the necessary parameters to control and balance system performance in terms of courier productivity and delivery efficiency. Our simulation results show that moderate levels of autonomy and cooperation lead to improved performance, with significant gains in workload distribution and responsiveness to changing demand patterns. Our findings highlight the potential of self-organizing and decentralized strategies to improve scalability, adaptability, and fairness in platform-based food delivery logistics. pdf
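As a toy illustration of the courier decision mechanism described above, the sketch below lets couriers with assumed autonomy and cooperativeness parameters accept or decline a stream of orders; the rule and parameter values are invented for illustration and do not reproduce the authors' NetLogo model or its relocation behavior.

```python
import random

random.seed(5)

class Courier:
    """Toy courier deciding whether to accept an offered order."""
    def __init__(self, autonomy, cooperativeness):
        self.autonomy = autonomy                  # 0 = accepts anything, 1 = very selective (assumed scale)
        self.cooperativeness = cooperativeness    # willingness to help clear a growing backlog
        self.jobs = 0

    def accepts(self, payoff, backlog):
        # More cooperative couriers lower their acceptance threshold when the backlog grows.
        threshold = self.autonomy - self.cooperativeness * min(backlog / 10.0, 1.0)
        return payoff >= threshold

couriers = [Courier(random.random(), random.random()) for _ in range(50)]
backlog, served = 0, 0
for _ in range(500):                              # stream of incoming orders
    backlog += 1
    payoff = random.random()                      # attractiveness of this order
    for c in sorted(couriers, key=lambda c: c.jobs):   # least-busy couriers are asked first
        if c.accepts(payoff, backlog):
            c.jobs += 1
            backlog -= 1
            served += 1
            break
print(f"served {served}/500, leftover backlog {backlog}")
```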
3D Vision-Based Anti-Collision System for Automatic Load Movements with Tower Cranes - A Simulation-Oriented Development Process Alexander Schock-Schmidtke, Gonzalo Bernabé Caparrós, and Johannes Fottner (Technical University of Munich) Program Track: Simulation and Artificial Intelligence Program Tags: Conceptual Modeling, Cyber-Physical Systems, Data Driven, Neural Networks Abstract AbstractThis paper presents a simulation-driven development approach for a camera-based anti-collision system designed for automated tower cranes. Stereo camera systems mounted directly on the crane's hook generate real-time 3D point clouds to detect people in the immediate danger zone of suspended loads. A virtual construction site was implemented in a game engine to simulate dynamic scenarios and varying weather conditions. The system utilizes a neural network for pedestrian detection and computes the minimum distance between load and detected persons. A closed-loop architecture enables real-time data exchange between simulation and processing components and allows easy transition to real-world cranes. The system was evaluated under different visibility conditions, showing high detection accuracy in clear weather and degraded performance in fog and rain due to the limitations of stereo vision. The results demonstrate the feasibility of using synthetic environments and point cloud-based perception to develop safety-critical assistance systems in construction automation. pdfAI-based Assembly Line Optimization in Aeronautics: a Surrogate and Genetic Algorithm Approach Maryam SAADI (Airbus Group, IMT Ales); Vincent Bernier (Airbus Group); and Gregory Zacharewicz and Nicolas Daclin (IMT) Program Track: Simulation and Artificial Intelligence Program Tags: AnyLogic, Neural Networks, Python Abstract AbstractIndustrial configuration planning requires testing many setups, which is time-consuming when each scenario must be evaluated through detailed simulation. To accelerate this process, we train a Multi-Layer Perceptron (MLP) to predict key performance indicators (KPIs) quickly, using it as a surrogate model. However, classical regression metrics such as Mean Squared Error (MSE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE) do not reflect prediction quality in all situations. To solve this issue, we introduce a classification-based evaluation strategy. We define acceptable prediction margins based on business constraints, then convert the regression output into discrete classes. We assess model performance using precision and recall. This approach reveals where the model makes critical errors and helps decision-makers at Airbus Helicopters trust the AI’s predictions. pdfAdvancing Military Decision Support: Reinforcement Learning-Driven Simulation for Robust Operational Plan Validation Michael Möbius and Daniel Kallfass (Airbus Defence and Space) and Stefan Göricke and Thomas Manfred Doll (German Armed Forces (Bundeswehr)) Program Track: Military and National Security Applications Program Tag: Neural Networks Abstract AbstractThe growing complexity of modern warfare demands advanced AI-driven decision support for validating Operational Plans (OPLANs). This paper proposes a multi-agent reinforcement learning framework integrated into the ReLeGSim environment to rigorously test military strategies under dynamic conditions. The adoption of deep reinforcement learning enables agents to learn optimal behavior within operational plans, transforming them into “intelligent executors”.
By observing these agents, one can identify vulnerabilities within plans. Key innovations include: (1) a hybrid approach combining action masking for strict OPLAN adherence with interleaved behavior cloning to embed military doctrine; (2) a sequential training approach where agents first learn baseline tactics before evaluating predefined plans; and (3) data farming techniques using heatmaps and key performance indicators to visualize strategic weaknesses. Experiments show hard action masking outperforms reward shaping for constraint enforcement. This work advances scalable, robust AI-driven OPLAN validation through effective domain knowledge integration. pdfAn Empirical Study on the Assessment of Demand Forecasting Reliability for Fabless Semiconductor Companies In-Guk Choi and Seon-Young Hwang (Korea Advanced Institute of Science and Technology); Jeongsun Ahn, Jehun Lee, and Sanghyun Joo (Korea Advanced Institute of Science and Technology); Kiung Kim, Haechan Lee, and Yoong Song (Samsung Electronics); and Hyung-Jung Kim (Korea Advanced Institute of Science and Technology) Program Track: MASM: Semiconductor Manufacturing Program Tags: Neural Networks, Supply Chain Abstract AbstractFabless semiconductor companies—semiconductor design experts without their own factories—serve as the essential bridge between sophisticated customer needs and technological innovations, playing a pivotal role in the semiconductor supply chain. At these companies, planning teams receive demand forecasts from the sales team and develop production plans that consider inventory, capacity, and lead time. However, due to the inherent characteristics of the semiconductor industry—high demand volatility, short product cycles, and extended lead times—a substantial gap often exists between sales forecasts and actual demand. Consequently, evaluating forecast reliability is critical for planning teams that rely solely on sales forecasts for production planning. In this paper, we propose a novel machine learning framework that assesses forecast reliability by classifying demand forecasts as either overestimates or underestimates rather than using regression methods. Experimental results confirm its effectiveness in assessing forecast reliability. pdfBridging the Gap: A Practical Guide to Implementing Deep Reinforcement Learning Simulation in Operations Research with Gymnasium Konstantinos Ziliaskopoulos, Alexander Vinel, and Alice E. Smith (Auburn University) Program Track: Introductory Tutorials Program Tags: Neural Networks, Python, Supply Chain Abstract AbstractDeep Reinforcement Learning (DRL) has shown considerable promise in addressing complex sequential decision-making tasks across various fields, yet its integration within Operations Research (OR) remains limited despite clear methodological compatibility. This paper serves as a practical tutorial aimed at bridging this gap, specifically guiding simulation practitioners and researchers through the process of developing DRL environments using Python and the Gymnasium library. We outline the alignment between traditional simulation model components, such as state and action spaces, objective functions, and constraints, and their DRL counterparts. Using an inventory control scenario as an illustrative example, which is also available online through our GitHub repository, we detail the steps involved in designing, implementing, and integrating custom DRL environments with contemporary DRL algorithms.
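The Gymnasium tutorial above centers on wrapping a simulation as a custom environment. The sketch below is an independent minimal inventory-control environment written against the standard Gymnasium Env interface; its costs, demand process, and parameter names are illustrative assumptions rather than the code in the authors' repository.

```python
import gymnasium as gym
import numpy as np
from gymnasium import spaces

class InventoryEnv(gym.Env):
    """Minimal inventory control: state = on-hand stock, action = order quantity (illustrative)."""

    def __init__(self, capacity=20, max_order=10, demand_rate=4.0):
        super().__init__()
        self.capacity, self.max_order, self.demand_rate = capacity, max_order, demand_rate
        self.observation_space = spaces.Box(0, capacity, shape=(1,), dtype=np.float32)
        self.action_space = spaces.Discrete(max_order + 1)      # order 0..max_order units

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.stock = self.capacity // 2
        return np.array([self.stock], dtype=np.float32), {}

    def step(self, action):
        self.stock = min(self.capacity, self.stock + int(action))
        demand = self.np_random.poisson(self.demand_rate)
        sold = min(self.stock, demand)
        self.stock -= sold
        # revenue minus holding cost and lost-sales penalty (all coefficients assumed)
        reward = 5.0 * sold - 1.0 * self.stock - 3.0 * (demand - sold)
        return np.array([self.stock], dtype=np.float32), reward, False, False, {}

env = InventoryEnv()
obs, _ = env.reset(seed=0)
for _ in range(5):
    obs, reward, *_ = env.step(env.action_space.sample())   # random policy, just to exercise the API
    print(obs, round(float(reward), 1))
```

Such an environment can then be passed directly to any Gymnasium-compatible DRL library for training, which is the integration step the tutorial walks through.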
pdfGNN-Heatmap Augmented Monte Carlo Tree Search for Cloud Workflow Scheduling Best Contributed Applied Paper - Finalist Dingyu Zhou, Jiaqi Huang, Yirui Zhang, and Wai Kin (Victor) Chan (Tsinghua University) Program Track: Simulation and Artificial Intelligence Program Tags: Monte Carlo, Neural Networks, Python Abstract AbstractThis paper addresses the NP-hard cloud workflow scheduling problem by proposing a novel method that integrates Graph Neural Networks with Monte Carlo Tree Search (MCTS). Cloud workflows, represented as Directed Acyclic Graphs, present significant scheduling challenges due to complex task dependencies and heterogeneous resource requirements. Our method leverages Anisotropic Graph Neural Networks to extract structural features from workflows and create a heatmap that guides the MCTS process during both the selection and simulation phases. Extensive experiments on workflows ranging from 30 to 110 tasks demonstrate that our method outperforms rule-based algorithms, classic MCTS, and other learning-based approaches; more notably, it achieves near-optimal solutions with only a 2.56% gap from exact solutions and demonstrates exceptional scalability to completely unseen workflow sizes. This synergistic integration of neural network patterns with Monte Carlo simulation-based search not only advances cloud workflow scheduling but also offers valuable insights for simulation-based optimization across diverse domains. pdfGenVision: Enhancing Construction Safety Monitoring with Synthetic Image Generation Jiuyi Xu (Colorado School of Mines), Meida Chen (USC Institute for Creative Technologies), and Yangming Shi (Colorado School of Mines) Program Track: Project Management and Construction Program Tags: Neural Networks, Python Abstract AbstractThe development of object detection models for construction safety is often limited by the availability of high-quality, annotated datasets. This study explores the use of synthetic images generated by DALL·E 3 to supplement or partially replace real data in training YOLOv8 for detecting construction-related objects. We compare three dataset configurations: real-only, synthetic-only, and a mixed set of real and synthetic images. Experimental results show that the mixed dataset consistently outperforms the other two across all evaluation metrics, including precision, recall, IoU, and mAP@0.5. Notably, detection performance for occluded or ambiguous objects such as safety helmets and vests improves with synthetic data augmentation. While the synthetic-only model shows reasonable accuracy, domain differences limit its effectiveness when used alone. These findings suggest that high-quality synthetic data can reduce reliance on real-world data and enhance model generalization, offering a scalable approach for improving construction site safety monitoring systems. pdfGraph-Based Reinforcement Learning for Dynamic Photolithography Scheduling Sang-Hyun Cho, Sohyun Jeong, and Jimin Park (Korea Advanced Institute of Science and Technology); Boyoon Choi and Paul Han (Samsung Display); and Hyun-Jung Kim (Korea Advanced Institute of Science and Technology) Program Track: MASM: Semiconductor Manufacturing Program Tags: Neural Networks, Python Abstract AbstractThis paper addresses the photolithography process scheduling problem, a critical bottleneck in both display and semiconductor production.
In display manufacturing, as the number of deposited layers increases and reentrant operations become more frequent, the complexity of scheduling processes has increased significantly. Additionally, growing market demand for diverse product types underscores the critical need for efficient scheduling to enhance operational efficiency and meet due dates. To address these challenges, we propose a novel graph-based reinforcement learning framework that dynamically schedules photolithography operations in real time, explicitly considering mask locations, machine statuses, and associated transfer times. Through numerical experiments, we demonstrate that our method achieves consistent and robust performance across various scenarios, making it a practical solution for real-world manufacturing systems. pdfHierarchical Population Synthesis Using a Neural-Differentiable Programming Approach Imran Mahmood Q. Hashmi, Anisoara Calinescu, and Michael Wooldridge (University of Oxford) Program Track: Agent-based Simulation Program Tags: Complex Systems, Neural Networks, Open Source, Python Abstract AbstractAdvances in Artificial Intelligence have enabled more accurate and scalable modelling of complex social systems, which depend on realistic high-resolution population data. We introduce a novel methodology for generating hierarchical synthetic populations using differentiable programming, producing detailed demographic structures essential for simulation and analysis. Existing approaches struggle to model hierarchical population structures and optimise over discrete demographic attributes. Leveraging feed-forward neural networks and Gumbel-Softmax encoding, our approach transforms aggregated census and survey data into continuous, differentiable forms, enabling gradient-based optimisation to match target demographics with high fidelity. The framework captures multi-scale population structures, including household composition and socio-economic diversity, with verification via logical rules and validation against census cross tables. A UK case study shows our model closely replicates real-world distributions. This scalable approach provides simulation modellers and analysts with high-fidelity synthetic populations as input for agent-based simulations of complex societal systems, enabling behaviour simulation, intervention evaluation, and demographic analysis. pdfLLM Assisted Value Stream Mapping Micha Jan Aron Selak, Dirk Krechel, and Adrian Ulges (RheinMain University of Applied Sciences) and Sven Spieckermann, Niklas Stoehr, and Andreas Loehr (SimPlan AG) Program Track: Modeling Methodology Program Tags: Neural Networks, Supply Chain Abstract AbstractThe correct design of digital value stream models is an intricate task, which can be challenging, especially for untrained or inexperienced users. We address the question of whether large language models can be adapted to "understand" a value stream’s structure and act as modeling assistants, which could support users in repairing errors and adding or configuring process steps in order to create valid value stream maps that can be simulated. Specifically, we propose a domain-specific multi-task training process, in which an instruction-tuned large language model is fine-tuned to yield specific information on its input value stream or to fix scripted modeling errors. The resulting model – which we coin Llama-VaStNet – can manipulate value stream structures given user requests in natural language.
We demonstrate experimentally that Llama-VaStNet outperforms its domain-agnostic vanilla counterpart, i.e. it is 19% more likely to produce correct individual manipulations. pdfModel Validation and LLM-based Model Enhancement for Analyzing Networked Anagram Experiments Hao He, Xueying Liu, and Xinwei Deng (Virginia Tech) Program Track: Modeling Methodology Program Tags: Neural Networks, Validation Abstract AbstractAgent-based simulations for networked anagram games, often taking advantage of the experimental data, are useful tools to investigate collaborative behaviors. To confidently incorporate the statistical analysis from the experimental data into the ABS, it is crucial to conduct sufficient validation for the underlying statistical models. In this work, we propose a systematic approach to evaluate the validity of statistical methods of players’ action sequence modeling for the networked anagram experiments. The proposed method can appropriately quantify the effect and validity of expert-defined covariates for modeling the players’ action sequence data. We further develop a Large Language Model (LLM)-guided method to augment the covariate set, employing iterative text summarization to overcome token limits. The performance of the proposed methods is evaluated under different metrics tailored for imbalanced data in networked anagram experiments. The results highlight the potential of LLM-driven feature discovery to refine the underlying statistical models used in agent-based simulations. pdfMulti-fidelity Simulation Framework for the Strategic Pooling of Surgical Assets Sean Shao Wei Lam (Singapore Health Services, Duke NUS Medical School); Boon Yew Ang (Singapore Health Services); Marcus Eng Hock Ong (Singapore Health Services, Duke NUS Medical School); and Hiang Khoon Tan (Singapore General Hospital, SingHealth Duke-NUS Academic Medicine Centre) Program Track: Healthcare and Life Sciences Program Tags: Neural Networks, Python Abstract AbstractThis study describes a multi-fidelity simulation framework integrating a high-fidelity discrete event simulation (DES) model with a machine learning (ML)-based low-fidelity model to optimize operating theatre (OT) scheduling in a major public hospital in Singapore. The high-fidelity DES model is trained and validated with real-world data and the low-fidelity model is trained and validated with synthetic data derived from simulation runs with the DES model. The high-fidelity model captures system complexities and uncertainties while the low-fidelity model facilitates policy optimization via the multi-objective non-dominated sorting genetic algorithm (NSGA-II). The optimization algorithm can identify Pareto-optimal policies under varying open access (OA) periods and strategies. Pareto optimal policies are derived across the dual objectives in maximizing OT utilization (OTU) and minimizing waiting time to surgery (WTS). These policies support post-hoc evaluation within an integrated decision support system (DSS). pdfTask-Aware Multi-Expert Architectures for Lifelong Deep Learning Jianyu Wang and JACOB NEAN-HUA SHEIKH (George Mason University), Cat P. Le (Duke University), and Hoda Bidkhori (George Mason University) Program Track: Simulation and Artificial Intelligence Program Tag: Neural Networks Abstract AbstractLifelong deep learning aims to enable neural networks to continuously learn across tasks while retaining previously acquired knowledge. 
This paper introduces an algorithm, Task-Aware Multi-Expert (TAME), which facilitates incremental and collaborative learning by leveraging task similarity to guide expert model selection and knowledge transfer. TAME retains a pool of pretrained neural networks and selectively activates the most relevant expert based on task similarity metrics. A shared dense layer then utilizes the appropriate expert's knowledge to generate a prediction. To mitigate catastrophic forgetting, TAME incorporates a limited-capacity replay buffer that stores representative samples and their embeddings from each task. Furthermore, an attention mechanism is integrated to dynamically prioritize the most relevant stored knowledge for each new task. The proposed algorithm is both flexible and adaptable across diverse learning scenarios. Experimental results on classification tasks derived from CIFAR-100 demonstrate that TAME significantly enhances classification performance while preserving knowledge across evolving task sequences. pdf
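The expert-selection and replay-buffer ideas in the TAME abstract above can be sketched briefly. The cosine-similarity metric, buffer capacity, and function names below are assumptions made for illustration rather than the authors' implementation.

```python
# Illustrative sketch of similarity-based expert selection with a small
# replay buffer (hypothetical details; not the TAME authors' code).
import numpy as np
from collections import deque


def select_expert(task_embedding, expert_embeddings):
    """Return the index of the expert whose task embedding is most similar
    (cosine similarity) to the new task's embedding."""
    sims = [
        float(np.dot(task_embedding, e) /
              (np.linalg.norm(task_embedding) * np.linalg.norm(e) + 1e-12))
        for e in expert_embeddings
    ]
    return int(np.argmax(sims))


class ReplayBuffer:
    """Limited-capacity buffer storing (sample, embedding) pairs per task."""

    def __init__(self, capacity=512):
        self.storage = deque(maxlen=capacity)

    def add(self, sample, embedding):
        self.storage.append((sample, embedding))

    def sample(self, batch_size, rng=None):
        rng = rng or np.random.default_rng()
        idx = rng.choice(len(self.storage),
                         size=min(batch_size, len(self.storage)), replace=False)
        return [self.storage[i] for i in idx]
```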
A Simulation-Based Evaluation of Strategies for Communicating Appointment Slots to Outpatients Aparna Venkataraman (University of Queensland, Indian Institute of Technology Delhi); Sisira Edirippulige (University of Queensland); and Varun Ramamohan (Indian Institute of Technology Delhi) Program Track: Healthcare and Life Sciences Program Tags: Open Source, Python Abstract AbstractIn this paper, we consider an outpatient consultation scheduling system with equal-length slots wherein a set of slots each day are reserved for walk-ins. Specifically, we consider the following questions in deciding slot start times to communicate to scheduled patients: (a) should information regarding patient arrival with respect to the slot start time communicated to them (arrival offset with respect to slot start – i.e., are they typically late or early) be considered in deciding the slot start time for communication, and (b) what impact does rounding the slot start time to the nearest 5th or 10th minute have on relevant outcomes? We answer these questions using a validated discrete-event simulation of an FCFS outpatient appointment system in a hospital accommodating both scheduled and walk-in patients. We also describe the development of the simulation itself, which is designed to optimize policies regarding management of walk-in patients and integration of telemedicine. pdfA Tutorial on Resource Modeling Using the Kotlin Simulation Library Manuel D. Rossetti (University of Arkansas) Program Track: Introductory Tutorials Program Tag: Open Source Abstract AbstractThe Kotlin Simulation Library (KSL) is an open-source library written in the Kotlin programming language that facilitates Monte Carlo and discrete-event simulation modeling. The library provides an API framework for developing, executing, and analyzing models using both the event view and the process view modeling perspectives. This paper provides a tutorial on modeling with resources within simulation models. The KSL will be utilized to illustrate important concepts that every simulation modeler should understand within the context of modeling resources within a simulation model. A general discussion of resource modeling concepts is presented. Then, examples are used to illustrate how to put the concepts into practice. While the concepts will be presented within the context of the KSL, the ideas should be important to users of other simulation languages. This tutorial provides both an overview of resource modeling constructs within the KSL and presents tutorial examples. pdfBuilding a Climate Responsive Agent-Based Modeling Simulation for the Walkability of the Tropical Hot and Humid Environment Daniel Jun Chung Hii and Takamasa Hasama (Kajima Corporation); Majid Sarvi (The University of Melbourne); and Marcel Ignatius, Joie Yan Yee Lim, Yijun Lu, and Nyuk Hien Wong (National University of Singapore) Program Track: Environment, Sustainability, and Resilience Program Tags: AnyLogic, Open Source Abstract AbstractClimate change affects thermal comfort and wellness by restricting the walkability potential of the built environment. This is especially true outdoors, under the harsh solar radiation exposure of the tropical hot and humid climate. A passive shading strategy plays the most significant role in the walkability potential. Vegetation and man-made structures such as pavements provide shade for comfortable navigation, with the latter being a more sustainable and wellbeing-friendly solution.
The walkability potential can be simulated using an agent-based modelling (ABM) technique. As a heat mitigation strategy to improve the walkability, the most direct intervention is to improve the connectivity of the shading zone along the shortest path between strategic locations. People tend to walk faster and choose the shortest path when dealing with direct sun exposure while avoiding it totally if it gets unbearably hot. The ABM simulation is useful for efficient urban planning of walkability potential on campus. pdfHierarchical Population Synthesis Using a Neural-Differentiable Programming Approach Imran Mahmood Q. Hashmi, Anisoara Calinescu, and Michael Wooldridge (University of Oxford) Program Track: Agent-based Simulation Program Tags: Complex Systems, Neural Networks, Open Source, Python Abstract AbstractAdvances in Artificial Intelligence have enabled more accurate and scalable modelling of complex social systems, which depend on realistic high-resolution population data. We introduce a novel methodology for generating hierarchical synthetic populations using differentiable programming, producing detailed demographic structures essential for simulation and analysis. Existing approaches struggle to model hierarchical population structures and optimise over discrete demographic attributes. Leveraging feed-forward neural networks and Gumbel-Softmax encoding, our approach transforms aggregated census and survey data into continuous, differentiable forms, enabling gradient-based optimisation to match target demographics with high fidelity. The framework captures multi-scale population structures, including household composition and socio-economic diversity, with verification via logical rules and validation against census cross tables. A UK case study shows our model closely replicates real-world distributions. This scalable approach provides simulation modellers and analysts with high-fidelity synthetic populations as input for agent-based simulations of complex societal systems, enabling behaviour simulation, intervention evaluation, and demographic analysis. pdfMapping Applications of Computer Simulation in Orthopedic Services: A Topic Modeling Approach Alison L. Harper, Thomas Monks, Navonil Mustafee, and Jonathan T. Evans (University of Exeter) and Al-Amin Kassam (Royal Devon University Healthcare) Program Track: Healthcare and Life Sciences Program Tags: Data Analytics, Open Source Abstract AbstractOrthopedic health services are characterized by high patient volumes, long elective waits, unpredictable emergency demand, and close coupling with other hospital processes. These present significant challenges for meeting operational targets and maintaining quality of care. In healthcare, simulation has been widely used for addressing similar challenges. This systematic scoping review identifies and analyzes academic papers using simulation to address operational-level challenges for orthopedic service delivery. We analyzed 37 studies over two decades, combining a structured analysis with topic modelling to categorize and map applications. Despite widespread recognition of its potential, simulation remains underutilized in orthopedics, with fragmented application and limited real-world implementation. Recent trends indicate a shift toward system-wide approaches that better align with operational realities and stakeholder needs.
Future research should aim to bridge methodological innovation with collaboration and practical application, such as hybrid and real-time simulation approaches focusing on stakeholder needs, and integrating relevant operational performance metrics. pdfMulti-agent Market Simulation for Deep Reinforcement Learning With High-Frequency Historical Order Streams David Byrd (Bowdoin College) Program Track: Simulation and Artificial Intelligence Program Tags: Complex Systems, Open Source, Python Abstract AbstractAs artificial intelligence rapidly co-evolves with complex modern systems, new simulation frameworks are needed to explore the potential impacts. In this article, I introduce a novel open source multi-agent financial market simulation powered by raw historical order streams at nanosecond resolution. The simulation is particularly targeted at deep reinforcement learning, but also includes momentum, noise, order book imbalance, and value traders, any number and type of which may simultaneously trade against one another and the historical order stream within the limit order books of the simulated exchange. The simulation includes variable message latency, automatic agent computation delays sampled in real time, and built-in tools for performance logging, statistical analysis, and plotting. I present the simulation features and design, demonstrate the framework on a multipart DeepRL use case with continuous actions and observations, and discuss potential future work. pdfTargeted Household Quarantining: Enhancing the Efficiency of Epidemic Response Johannes Ponge (University of Münster), Julian Patzner (Martin Luther University Halle-Wittenberg), and Bernd Hellingrath and André Karch (University of Münster) Program Track: Healthcare and Life Sciences Program Tag: Open Source Abstract AbstractNon-pharmaceutical interventions (NPIs) are the immediate public health reaction to emerging epidemics. While they generally help slow down infection dynamics, they can be associated with relevant socioeconomic costs, like lost school- or work days caused by preemptive household quarantines. However, research suggests that not all households contribute equally to the overall infection dynamics. In this study, we introduce the novel “Infection Contribution” metric that allows us to trace the involvement of particular household types over entire infection chains. Building upon the German Epidemic Microsimulation System, we quantify the impact of various household types, considering their size and composition in a COVID-19-like scenario. Additionally, we show how targeting interventions based on household characteristics produces efficient strategies, outperforming non-selective strategies in almost all scenarios. Our approach can be transferred to other NPIs, such as school closure, testing, or contact tracing, and even inform the prioritization of vaccinations. pdfTowards a DEVS-Based Simulation Engine for Digital Twin Applications Arnis Lektauers (Riga Technical University) Program Track: Simulation as Digital Twin Program Tags: DEVS, Open Source Abstract AbstractDigital twins (DT) are increasingly being adopted to improve system monitoring, prediction, and decision making in various domains. Although simulation plays a central role in many DT implementations, a lack of formal modeling foundations often leads to ad hoc and non-scalable solutions. This paper proposes a simulation engine for DT applications based on the Discrete Event System Specification (DEVS) formalism. 
DEVS provides a robust, modular, and hierarchical modeling framework suitable for modeling the structure and behavior of complex cyber-physical systems. A key contribution is the integration of the Parallel DEVS for Multicomponent Systems (multiPDEVS) formalism with X-Machines to support state and memory separation for simulation models with the goal of improving model scalability and reusability, as well as providing a basis for integration with DTs. The paper presents the architectural design of the engine, highlights its main functional components, and demonstrates its capabilities using a preliminary use case. pdf
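For readers unfamiliar with the DEVS formalism underlying the proposed engine, an atomic model is defined by its internal and external transition functions, an output function, and a time-advance function. The skeleton below is a generic, minimal illustration of that interface, not the paper's engine or its multiPDEVS/X-Machines extension.

```python
# Generic DEVS atomic-model skeleton (illustrative; unrelated to the paper's engine).
import math


class AtomicDEVS:
    """A minimal 'ping' generator: emits an output every `period` time units."""

    def __init__(self, period=1.0):
        self.period = period
        self.phase = "active"        # current state
        self.sigma = period          # time remaining until the next internal event

    def time_advance(self):
        # ta(s): how long the model stays in the current state.
        return self.sigma if self.phase == "active" else math.inf

    def output(self):
        # lambda(s): output produced just before an internal transition.
        return "ping" if self.phase == "active" else None

    def internal_transition(self):
        # delta_int(s): state change after an internal event.
        self.sigma = self.period

    def external_transition(self, elapsed, message):
        # delta_ext(s, e, x): react to an input; a 'stop' message passivates the model.
        if message == "stop":
            self.phase = "passive"
        else:
            self.sigma = max(self.sigma - elapsed, 0.0)
```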
Central Limit Theorem for a Randomized Quasi-Monte Carlo Estimator of a Smooth Function of Means Marvin K. Nakayama (New Jersey Institute of Technology), Bruno Tuffin (Inria), and Pierre L'Ecuyer (Université de Montréal) Program Track: Analysis Methodology Program Tags: Monte Carlo, Output Analysis, Variance Reduction Abstract AbstractConsider estimating a known smooth function (such as a ratio) of unknown means. Our paper accomplishes this by first estimating each mean via randomized quasi-Monte Carlo and then evaluating the function at the estimated means. We prove that the resulting plug-in estimator obeys a central limit theorem by first establishing a joint central limit theorem for a triangular array of estimators of the vector of means and then employing the delta method. pdfEvaluating Comprehension of Agent-Based Social Simulation Visualization Techniques: A Framework Based on Statistical Literacy and Cognitive Processing Kotaro Ohori and Kyoko Kageura (Toyo University) and Shohei Yamane (Fujitsu Ltd.) Program Track: Agent-based Simulation Program Tags: Complex Systems, Output Analysis Abstract AbstractAgent-based social simulation (ABSS) has gained attention as a powerful method for analyzing complex social phenomena. However, the visualization of ABSS outputs is often difficult to interpret for users without expertise in ABSS modeling. This study analyzes how statistical literacy affects the comprehension of ABSS visualizations, based on cognitive processes defined in educational psychology. A web-based survey using five typical visualizations based on Schelling’s segregation model was conducted in Japan. The results showed a moderate positive correlation between statistical literacy and visualization comprehension, while some visualizations remained difficult to interpret even for participants with high literacy. Further machine learning analysis revealed that model performance varied by cognitive stage, and that basic and applied statistical skills had different impacts on comprehension across stages. These findings provide a foundation for designing visualizations tailored to user characteristics and offer insights for effective communication based on ABSS. pdfExploiting Functional Data for Combat Simulation Sensitivity Analysis Lisa Joanne Blumson, Charlie Peter, and Andrew Gill (Defence Science and Technology Group) Program Track: Analysis Methodology Program Tags: Data Analytics, DOE, Metamodeling, Output Analysis, R Abstract AbstractComputationally expensive combat simulations are often used to inform military decision-making, and sensitivity analyses enable the quantification of the effect of military capabilities or tactics on combat mission effectiveness. The sensitivity analysis is performed using a meta-model that approximates the simulation's input-output relationship, and the output data that most combat meta-models are fitted to correspond to end-of-run mission effectiveness measures. However, during execution, a simulation records a large array of temporal data. This paper seeks to examine whether functional combat meta-models fitted to this temporal data, and the subsequent sensitivity analysis, could provide a richer characterization of the effect of military capabilities or tactics. An approach from Functional Data Analysis will be used to illustrate the potential benefits on a case study involving a closed-loop, stochastic land combat simulation.
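The plug-in construction in the Nakayama, Tuffin, and L'Ecuyer abstract above follows the classical delta-method pattern. The statement below is the textbook Monte Carlo form of that argument, written in generic notation chosen here; the paper's randomized quasi-Monte Carlo result concerns a triangular array of RQMC estimators, whose convergence rate and covariance structure differ.

```latex
% Standard delta-method CLT for a smooth function of estimated means
% (generic notation; not the paper's RQMC-specific theorem).
Let $\boldsymbol{\mu} = (\mu_1,\dots,\mu_d)$ be the vector of unknown means and
$\bar{\mathbf{Y}}_n$ an estimator satisfying a joint CLT,
\[
  \sqrt{n}\,\bigl(\bar{\mathbf{Y}}_n - \boldsymbol{\mu}\bigr) \;\Rightarrow\; \mathcal{N}(\mathbf{0},\, \Sigma).
\]
If $g:\mathbb{R}^d \to \mathbb{R}$ is differentiable at $\boldsymbol{\mu}$ with gradient
$\nabla g(\boldsymbol{\mu}) \neq \mathbf{0}$ (e.g., the ratio $g(\mu_1,\mu_2) = \mu_1/\mu_2$), then the
plug-in estimator $g(\bar{\mathbf{Y}}_n)$ obeys
\[
  \sqrt{n}\,\bigl(g(\bar{\mathbf{Y}}_n) - g(\boldsymbol{\mu})\bigr)
  \;\Rightarrow\; \mathcal{N}\!\bigl(0,\; \nabla g(\boldsymbol{\mu})^{\top} \Sigma\, \nabla g(\boldsymbol{\mu})\bigr).
\]
```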
pdfToward Automating System Dynamics Modeling: Evaluating LLMs in the Transition from Narratives to Formal Structures Jhon G. Botello (Virginia Modeling, Analysis, and Simulation Center) and Brian Llinas, Jose Padilla, and Erika Frydenlund (Old Dominion University) Program Track: Simulation and Artificial Intelligence Program Tags: Complex Systems, Conceptual Modeling, Output Analysis, System Dynamics Abstract AbstractTransitioning from narratives to formal system dynamics (SD) models is a complex task that involves identifying variables, their interconnections, feedback loops, and the dynamic behaviors they exhibit. This paper investigates how large language models (LLMs), specifically GPT-4o, can support this process by bridging narratives and formal SD structures. We compare zero-shot prompting with chain-of-thought (CoT) iterations using three case studies based on well-known system archetypes. We evaluate the LLM’s ability to identify the systemic structures, variables, causal links, polarities, and feedback loop patterns. We present both quantitative and qualitative assessments of the results. Our study demonstrates the potential of guided reasoning to improve the transition from narratives to system archetypes. We also discuss the challenges of automating SD modeling, particularly in scaling to more complex systems, and propose future directions for advancing toward automated modeling and simulation in SD assisted by AI. pdf
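As a concrete illustration of the narrative-to-structure step studied above, a causal link can be stored as a (source, target, polarity) triple and a chain-of-thought prompt can be assembled around the narrative. The prompt wording and data structure below are hypothetical examples, not the authors' protocol.

```python
# Hypothetical illustration of representing SD causal links and building a
# chain-of-thought extraction prompt (not the authors' actual prompts).
from dataclasses import dataclass


@dataclass
class CausalLink:
    source: str
    target: str
    polarity: str  # "+" (same direction) or "-" (opposite direction)


COT_TEMPLATE = """You are helping build a system dynamics model.
Narrative:
{narrative}

Step 1: List the candidate variables mentioned or implied.
Step 2: For each pair of related variables, state the causal link and its polarity (+/-).
Step 3: Identify any feedback loops formed by these links and label them reinforcing or balancing.
Return the links as lines of the form: source -> target (polarity)."""


def build_cot_prompt(narrative: str) -> str:
    return COT_TEMPLATE.format(narrative=narrative)


# Example parsed output for a simple archetype: a reinforcing adoption loop.
links = [CausalLink("adoption", "word of mouth", "+"),
         CausalLink("word of mouth", "adoption", "+")]
```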
DDA-PDES: A Data-Dependence Analysis Parallel Discrete-Event Simulation Framework for Event-Level Parallelization of General-Purpose DES Models Erik J. Jensen; James F. Leathrum, Jr.; Christopher J. Lynch; and Katherine Smith (Old Dominion University) and Ross Gore (Old Dominion University, Center for Secure and Intelligent Critical Systems) Program Track: Modeling Methodology Program Tags: C++, Parallel Abstract AbstractUtilizing data-dependence analysis (DDA) in parallel discrete-event simulation (PDES) to find event-level parallelism, we present the DDA-PDES framework as an alternative to spatial-decomposition (SD) PDES. DDA-PDES uses a pre-computed Independence Time Limit (ITL) table to efficiently identify events in the pending-event set that are ready for execution, in a shared-memory-parallel simulation engine. Experiments with AMD, Qualcomm, and Intel platforms using several packet-routing network models and a PHOLD benchmark model demonstrate speedup of up to 8.82x and parallel efficiency of up to 0.91. In contrast with DDA-PDES, experiments with similar network models in ROSS demonstrate that SD-PDES cannot speed up the packet-routing models without degradation to routing efficacy. Our results suggest DDA-PDES is an effective method for parallelizing discrete-event simulation models that are computationally intensive, and may be superior to traditional PDES methods for spatially-decomposed models with challenging communication requirements. pdfOptimizing Event Timestamp Processing in Time Warp Gaurav Shinde (STERIS, Inc); Sounak Gupta (Oracle, Inc); and Philip A. Wilsey (University of Cincinnati) Program Track: Modeling Methodology Program Tags: Distributed, Parallel Abstract Abstractwarped2 is a general purpose discrete event simulation kernel that contains a robust event time comparison mechanism to support a broad range of modeling domains. The warped2 kernel can be configured for sequential, parallel, or distributed execution. The parallel or distributed versions implement the Time Warp mechanism (with its rollback and relaxed causality) such that a total order on events can be maintained. To maintain a total order, warped2 has an event ordering mechanism that contains up to 10 comparison operations. While not all comparisons require evaluation of all 10 relations, the overall cost of time comparisons in a warped2 simulation can still consume approximately 15-20% of the total runtime. This work examines the runtime costs of time comparisons in a parallel configuration of the warped2 simulation kernel. Optimizations to the time comparison mechanism are explored and the performance impacts of each are reported. pdfScalable, Rule-Based, Parallel Discrete Event Based Agentic AI Simulations Atanu Barai, Stephan Eidenbenz, and Nandakishore Santhi (Los Alamos National Laboratory) Program Track: Simulation and Artificial Intelligence Program Tags: Parallel, Python Abstract AbstractWe introduce a novel parallel discrete event simulation (PDES)-based method to couple multiple AI and non-AI agents in a rule-based manner with dynamic constraints while ensuring correctness of output. Our coupling mechanism enables the agents to work in a co-operative environment towards a common goal while many sub-tasks run in parallel. AI agents trained on vast amounts of human data naturally model complex human behaviors and emotions – this is in contrast to conventional agents, which need to be burdensomely complex to capture aspects of human behavior.
Distributing smaller AI agents on a large heterogeneous CPU/GPU cluster enables extremely scalable simulation tapping into the collective complexity of individual smaller models, while circumventing local memory bottlenecks for model parameters. We illustrate the potential of our approach with examples from traffic simulation and robot gathering, where we find our
coupling of AI/non-AI agents improves overall fidelity. pdf
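The rule-based coupling of AI and non-AI agents described in the abstract above can be pictured with a toy pending-event loop in which a simple rule gates when an AI agent may act. The scheduler, agents, and gating rule below are illustrative assumptions, not the authors' PDES framework.

```python
# Toy discrete-event loop coupling heterogeneous agents under a simple rule
# (illustrative only; not the paper's PDES framework).
import heapq


class Agent:
    def __init__(self, name, period, is_ai=False):
        self.name, self.period, self.is_ai = name, period, is_ai

    def act(self, now):
        # In a real system this would invoke a model or a behavior routine;
        # here it reports the event and reschedules itself after `period`.
        return self.period, f"{self.name} acts at t={now:.2f}"


def run(agents, horizon=4.0):
    pending, seq = [], 0                      # min-heap of (timestamp, seq, agent)
    for a in agents:
        heapq.heappush(pending, (0.0, seq, a))
        seq += 1
    non_ai_acted_since_last_ai = True         # state of the simple coupling rule
    while pending:
        t, _, agent = heapq.heappop(pending)
        if t > horizon:
            break
        # Rule: an AI agent may only act after some non-AI agent has acted
        # since the AI's previous action (a crude correctness gate).
        if agent.is_ai and not non_ai_acted_since_last_ai:
            heapq.heappush(pending, (t + 0.25, seq, agent))   # defer the AI event
            seq += 1
            continue
        delay, msg = agent.act(t)
        print(msg)
        non_ai_acted_since_last_ai = not agent.is_ai
        heapq.heappush(pending, (t + delay, seq, agent))
        seq += 1


run([Agent("traffic_cell", period=1.0), Agent("llm_planner", period=0.5, is_ai=True)])
```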
Integrating Expert Trustworthiness into Digital Twin Models Extracted from Expert Knowledge and Internet of Things Data: A Case Study in Reliability Michelle Jungmann (Karlsruhe Institute of Technology) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, University of Southern Denmark) Program Track: Simulation as Digital Twin Program Tags: Petri Nets, Process Mining, Python Abstract AbstractThe extraction of Digital Twin models from both expert knowledge and Internet of Things data remains an underexplored area, with existing approaches typically being highly customized. Expert knowledge, provided by human experts, is influenced by individual experience, contextual understanding and domain-specific knowledge, leading to varying levels of uncertainty and trustworthiness. In this paper, we address the identified research gap by extending our previous work and introducing a novel approach that models and integrates expert trustworthiness into the extraction of what we term data-knowledge fused Digital Twin models. Key features of the approach are: quantifications of expert trustworthiness and algorithms for selecting and integrating knowledge into model extractions based on trustworthiness. We demonstrate our approach for quantifying and incorporating trustworthiness levels in a reliability modeling case study. pdfMulti-flow Process Mining as an Enabler for Comprehensive Digital Twins of Manufacturing Systems Atieh Khodadadi and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology) Program Track: Simulation as Digital Twin Program Tags: Complex Systems, Petri Nets Abstract AbstractProcess Mining (PM) has proven useful for extracting Digital Twin (DT) simulation models for manufacturing systems. PM is a family of approaches designed to capture temporal process flows by analyzing event logs that contain time-stamped records of relevant events. With the widespread availability of sensors in modern manufacturing systems, events can be tracked across multiple process dimensions beyond time, enabling a more comprehensive performance analysis. Some of these dimensions include energy and waste. By integrating and treating these dimensions analogously to time, we enable the use of PM to extract process flows along multiple dimensions, an approach we refer to as multi-flow PM. The resulting models that capture multiple dimensions are ultimately combined to enable comprehensive DTs that support multi-objective decision-making. In this paper, we present our approach to generating these multidimensional discrete-event models and, through an illustrative case study, demonstrate how they can be utilized for multi-objective decision support. pdfPreserving Dependencies in Partitioned Digital Twin Models for Enabling Modular Validation Ashkan Zare (University of Southern Denmark) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology) Program Track: Simulation as Digital Twin Program Tags: Petri Nets, Validation Abstract AbstractLeveraging Digital Twins, as near real-time replicas of physical systems, can help identify inefficiencies and optimize production in manufacturing systems. Digital Twins’ effectiveness, however, relies on continuous validation of the underlying models to ensure accuracy and reliability, which is particularly challenging for complex, multi-component systems where different components evolve at varying rates. Modular validation mitigates this challenge by decomposing models into smaller sub-models, allowing for tailored validation strategies. 
A key difficulty in this approach is preserving the interactions and dependencies among the sub-models while validating them individually; isolated validation may yield individually valid sub-models while failing to ensure overall model consistency. To address this, we build on our previously proposed modular validation framework and introduce an approach that enables sub-model validation while maintaining interdependencies. By ensuring that the validation process reflects these dependencies, our method enhances the effectiveness of Digital Twins in dynamic manufacturing environments. pdf
Automated Business Process Simulation Studies: Where do Humans Fit In? Samira Khraiwesh and Luise Pufahl (Technical University of Munich) Program Track: Data Science and Simulation Program Tags: Data Driven, Process Mining Abstract AbstractBusiness Process Simulation (BPS) is crucial for enhancing organizational efficiency and decision-making, enabling organizations to test process changes in a virtual environment without real-world consequences. Despite advancements in automatic simulation model discovery using process mining, BPS is still underused due to challenges in accuracy. Human-in-the-Loop (HITL) integrates human expertise into automated systems, where humans guide, validate, or intervene in the automation process to ensure accuracy and context. This paper introduces a framework identifying key stages in BPS studies where HITL can be applied and the factors influencing the degree of human involvement. The framework is based on a literature review and expert interviews, providing valuable insights and implications for researchers and practitioners. pdfData-driven Digital Twin for the Predictive Maintenance of Business Processes Paolo Bocciarelli and Andrea D'Ambrogio (University of Rome Tor Vergata) Program Track: Simulation as Digital Twin Program Tags: Data Driven, Metamodeling, Process Mining Abstract AbstractThis paper presents a data-driven framework for the predictive maintenance of Business Processes based on the Digital Twin paradigm. The proposed approach integrates process mining techniques and a low-code development approach to build reliability-aware simulation models from system logs. These models are used to automatically generate executable DTs capable of predicting resource failures and estimating the Remaining Useful Life (RUL) of system components. The predictions are then exploited to trigger preventive actions or automated reconfigurations. The framework is implemented using PyBPMN/eBPMN and evaluated on a manufacturing case study. Results show that the DT enables timely interventions, minimizes system downtimes, and ensures process continuity. pdfIntegrating Expert Trustworthiness into Digital Twin Models Extracted from Expert Knowledge and Internet of Things Data: A Case Study in Reliability Michelle Jungmann (Karlsruhe Institute of Technology) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, University of Southern Denmark) Program Track: Simulation as Digital Twin Program Tags: Petri Nets, Process Mining, Python Abstract AbstractThe extraction of Digital Twin models from both expert knowledge and Internet of Things data remains an underexplored area, with existing approaches typically being highly customized. Expert knowledge, provided by human experts, is influenced by individual experience, contextual understanding and domain-specific knowledge, leading to varying levels of uncertainty and trustworthiness. In this paper, we address the identified research gap by extending our previous work and introducing a novel approach that models and integrates expert trustworthiness into the extraction of what we term data-knowledge fused Digital Twin models. Key features of the approach are: quantifications of expert trustworthiness and algorithms for selecting and integrating knowledge into model extractions based on trustworthiness. We demonstrate our approach for quantifying and incorporating trustworthiness levels in a reliability modeling case study. pdf
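One simple way to picture how expert trustworthiness can steer model extraction, as in the data-knowledge fused Digital Twin work above, is to weight competing expert-provided parameter estimates by a trustworthiness score before they enter the model. The scoring scale, threshold, and weighting rule below are illustrative assumptions, not the authors' algorithms.

```python
# Illustrative trustworthiness-weighted fusion of expert estimates
# (hypothetical scores and weighting rule; not the paper's algorithm).
def fuse_expert_estimates(estimates, min_trust=0.3):
    """estimates: list of (value, trust) pairs with trust in [0, 1].
    Experts below `min_trust` are ignored; the rest contribute a
    trust-weighted average."""
    kept = [(v, t) for v, t in estimates if t >= min_trust]
    if not kept:
        raise ValueError("no sufficiently trustworthy expert input")
    total = sum(t for _, t in kept)
    return sum(v * t for v, t in kept) / total


# e.g. three experts estimating a mean repair time in hours:
print(fuse_expert_estimates([(4.0, 0.9), (6.0, 0.5), (12.0, 0.2)]))  # -> ~4.71
```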
Modeling and Simulation of Surgical Procedures with an Application to Laparoscopic Cholecystectomy Yiyu Wang and Vincent Augusto (Ecole des Mines de Saint-Etienne), Canan Pehlivan (IMT Mines Albi), Julia Fleck (Ecole des Mines de Saint-Etienne), and Nesrine Mekhenane (Chaire Bopa) Program Track: Data Science and Simulation Program Tags: Process Mining, Python Abstract AbstractSurgeons’ actions are key to surgical success. Our objective is to develop a decision-support tool to help prioritize patient safety and reduce risks during surgery. We propose a structured mathematical framework that defines key components of a surgical procedure, making it adaptable to various types of surgeries. Using the CholecT50 dataset, we generate and pre-process event logs to construct a process map that models the surgical workflow through Process Mining techniques. This process map provides insights into procedural patterns and can be visualized at different levels of granularity to align with surgeons’ needs. To validate its effectiveness, we simulate synthetic surgeries and assess the process map’s performance in
replicating real surgical workflows. By demonstrating the generalizability of our approach, this work paves the way for the development of an advanced decision-support tool that can assist surgeons in real-time decision-making and post-operative analysis. pdf
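The process-map construction from event logs used in the surgical workflow study above rests on the classic directly-follows counting step of process mining. The log format and function below are a generic sketch, not the authors' CholecT50 pipeline.

```python
# Generic directly-follows-graph construction from an event log
# (illustrative; not the authors' CholecT50 pipeline).
from collections import defaultdict


def directly_follows(event_log):
    """event_log: dict mapping case id -> list of activities in time order.
    Returns counts of how often activity a is directly followed by b."""
    counts = defaultdict(int)
    for case, activities in event_log.items():
        for a, b in zip(activities, activities[1:]):
            counts[(a, b)] += 1
    return dict(counts)


log = {
    "surgery_01": ["incision", "dissection", "clipping", "cutting", "extraction"],
    "surgery_02": ["incision", "dissection", "coagulation", "clipping", "cutting", "extraction"],
}
for (a, b), n in sorted(directly_follows(log).items()):
    print(f"{a} -> {b}: {n}")
```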
A Digital Twin of Water Network for Exploring Sustainable Water Management Strategies Souvik Barat, Abhishek Yadav, and Vinay Kulkarni (Tata Consultancy Services Ltd); Gurudas Nulkar and Soomrit Chattopadhyay (Gokhale Institute of Politics and Economics, Pune); and Ashwini Keskar (Pune Knowledge Cluster, Pune) Program Track: Simulation as Digital Twin Program Tags: Emergent Behavior, Python, System Dynamics Abstract AbstractEfficient water management is an increasingly critical challenge for policymakers tasked with ensuring reliable water availability for agriculture, industry and domestic use while mitigating flood risks during monsoon seasons. This challenge is especially pronounced in regions where water networks rely primarily on rain-fed systems. Managing such a water ecosystem is complex due to inherent constraints on water sources, storage, and flow; environmental uncertainties such as variable rainfall and evaporation; and the increasing needs of urbanization, industrial expansion, and equity in interstate water sharing. In this study, we present a stock-and-flow-based simulatable digital twin designed to accurately represent the dynamics of a rain-dependent water network comprising dams, rivers and associated environmental and usage factors. The model supports scenario-based simulation and the evaluation of mitigation policies to enable evidence-based decision-making. We demonstrate the usefulness of our approach using a real water body network from western India that covers more than 300 km of heterogeneous landscape. pdfA Simulation-Based Evaluation of Strategies for Communicating Appointment Slots to Outpatients Aparna Venkataraman (University of Queensland, Indian Institute of Technology Delhi); Sisira Edirippulige (University of Queensland); and Varun Ramamohan (Indian Institute of Technology Delhi) Program Track: Healthcare and Life Sciences Program Tags: Open Source, Python Abstract AbstractIn this paper, we consider an outpatient consultation scheduling system with equal-length slots wherein a set of slots each day are reserved for walk-ins. Specifically, we consider the following questions in deciding slot start times to communicate to scheduled patients: (a) should information regarding patient arrival with respect to the slot start time communicated to them (arrival offset with respect to slot start – i.e., are they typically late or early) be considered in deciding the slot start time for communication, and (b) what impact does rounding the slot start time to the nearest 5th or 10th minute have on relevant outcomes? We answer these questions using a validated discrete-event simulation of an FCFS outpatient appointment system in a hospital accommodating both scheduled and walk-in patients. We also describe the development of the simulation itself, which is designed to optimize policies regarding management of walk-in patients and integration of telemedicine. pdfA Simulation-enabled Framework for Mission Engineering Problem Definition: Integrating AI-driven Knowledge Retrieval with Human-centered Design Rafi Soule and Barry C. E (Old Dominion University) Program Track: Modeling Methodology Program Tags: Complex Systems, Conceptual Modeling, Python Abstract AbstractMission Engineering (ME) requires coordination of multiple systems and stakeholders, but often suffers from unclear problem definitions, fragmented knowledge, and limited engagement.
This paper proposes a hybrid methodology integrating Retrieval-Augmented Generation (RAG), Human-Centered Design (HCD), and Participatory Design (PD) within a Model-Based Systems Engineering (MBSE) framework. The approach generates context-rich, stakeholder-aligned mission problem statements, as demonstrated in the Spectrum Lab case study, ultimately improving mission effectiveness and stakeholder collaboration. pdfAI-based Assembly Line Optimization in Aeronautics: a Surrogate and Genetic Algorithm Approach Maryam SAADI (Airbus Group, IMT Ales); Vincent Bernier (Airbus Group); and Gregory Zacharewicz and Nicolas Daclin (IMT) Program Track: Simulation and Artificial Intelligence Program Tags: AnyLogic, Neural Networks, Python Abstract AbstractIndustrial configuration planning requires testing many setups, which is time-consuming when each scenario must be evaluated through detailed simulation. To accelerate this process, we train a Multi-Layer Perceptron (MLP) to predict key performance indicators (KPIs) quickly, using it as a surrogate model. However, classical regression metrics such as Mean Squared Error (MSE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE) do not reflect prediction quality in all situations. To solve this issue, we introduce a classification-based evaluation strategy. We define acceptable prediction margins based on business constraints, then convert the regression output into discrete classes. We assess model performance using precision and recall. This approach reveals where the model makes critical errors and helps decision-makers at Airbus Helicopters trust the AI’s predictions. pdfASTROMoRF: Adaptive Sampling Trust-Region Optimization with Dimensionality Reduction Benjamin Wilson Rees, Christine S.M. Currie, and Vuong Phan (University of Southampton) Program Track: Simulation Optimization Program Tags: Metamodeling, Python Abstract AbstractHigh dimensional simulation optimization problems have become prevalent in recent years. In practice, the objective function is typically influenced by a lower dimensional combination of the original decision variables, and implementing dimensionality reduction can improve the efficiency of the optimization algorithm. In this paper, we introduce a novel algorithm, ASTROMoRF, that combines adaptive sampling with dimensionality reduction, using an iterative trust-region approach. Within a trust-region algorithm, a series of surrogates or metamodels is built to estimate the objective function. Using a lower dimensional subspace reduces the number of design points needed for building a surrogate within each trust-region and consequently the number of simulation replications. We explain the basis for the algorithm within the paper and compare its finite-time performance with other state-of-the-art solvers. pdfAURORA: Enhancing Synthetic Population Realism Through RAG and Salience-Aware Opinion Modeling Rebecca Marigliano and Kathleen Carley (Carnegie Mellon University) Program Track: Simulation and Artificial Intelligence Program Tags: Data Driven, Input Modeling, Python Abstract AbstractSimulating realistic populations for strategic influence and social-cyber modeling requires agents that are demographically grounded, emotionally expressive, and contextually coherent. Existing agent-based models often fail to capture the psychological and ideological diversity found in real-world populations.
This paper introduces AURORA, a Retrieval-Augmented Generation (RAG)-enhanced framework that leverages large language models (LLMs), semantic vector search, and salience-aware topic modeling to construct synthetic communities and personas. We compare two opinion modeling strategies and evaluate three LLMs—gemini-2.0-flash, deepseek-chat, and gpt-4o-mini—in generating emotionally and ideologically varied agents. Results show that community-guided strategies improve meso-level opinion realism, and LLM selection significantly affects persona traits and emotions. These findings demonstrate that principled LLM integration and salience-aware modeling can enhance the realism and strategic utility of synthetic populations for simulating narrative diffusion, belief change, and social response in complex information environments. pdfAdvanced Dynamic Spare Parts Inventory Management Utilizing Machine Health Data Best Contributed Applied Paper - Finalist Jennifer Kruman, Avital Kaufman, and Yale Herer (Technion) Program Track: Data Science and Simulation Program Tags: Python, Supply Chain Abstract AbstractThis research presents a novel approach to spare parts inventory management by integrating real-time machine health data with dynamic, state-dependent inventory policies. Traditional static models overlook the evolving conditions of industrial machinery. Leveraging advanced digital technologies, such as those pioneered by Augury, our framework dynamically adjusts inventory levels, reducing costs and improving service. Using Markov chain modeling, simulation, and industry collaboration, we demonstrate up to 29% cost savings with state-dependent policies over static base-stock models. Sensitivity analysis confirms the robustness of these strategies. pdfAgent-Based Model of Dynamics between Objective and Perceived Quality of Healthcare System Jungwoo Kim (KAIST), Moo Hyuk Lee (Seoul National University College of Medicine), Ji-Su Lee (KAIST), Young Kyung Do (Seoul National University College of Medicine), and Taesik Lee (KAIST) Program Track: Healthcare and Life Sciences Program Tag: Python Abstract AbstractNationwide patient concentration poses a significant burden on healthcare systems, largely due to patients’ perception that metropolitan regions offer superior care quality. To better understand this phenomenon, we present an agent-based model to examine how objective quality (OQ) and perceived quality (PQ) co-evolve in a free-choice healthcare system, using South Korea as a salient case. Four mechanisms — preferential hospital choice, scale effect, quality recognition, and word-of-mouth — form a feedback loop: concentration raises OQ, utilization updates PQ, and perceptions diffuse through the population. We identify three emergent phenomena — local dominance, global dominance, and asymmetric quality recognition — and interpret how each contributes to patient outmigration. Building on these insights, we further explore strategies such as “local tiering” and “information provision.” This model-based approach deepens understanding of OQ–PQ dynamics, and offers insights for addressing nationwide healthcare utilization in various contexts. 
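To make the objective-quality/perceived-quality feedback loop described above more concrete, one iteration can be written as a pair of update rules: perception moves toward experienced quality, then diffuses through word-of-mouth. The learning rates and mixing rule below are illustrative assumptions, not the authors' model equations.

```python
# One illustrative iteration of a perceived-quality (PQ) update with
# word-of-mouth mixing (hypothetical rates; not the authors' model).
import numpy as np


def update_pq(pq, oq, visited, neighbors_mean_pq, alpha=0.3, beta=0.1):
    """pq, oq: arrays of perceived / objective quality per hospital.
    visited: boolean array, whether the patient visited each hospital this step.
    neighbors_mean_pq: average PQ held by the patient's social contacts."""
    pq = pq.copy()
    # Quality recognition: visited hospitals pull PQ toward OQ.
    pq[visited] += alpha * (oq[visited] - pq[visited])
    # Word-of-mouth: all beliefs drift toward the neighborhood average.
    pq += beta * (neighbors_mean_pq - pq)
    return pq


pq = np.array([0.5, 0.5, 0.5])
oq = np.array([0.8, 0.6, 0.4])
visited = np.array([True, False, False])
print(update_pq(pq, oq, visited, neighbors_mean_pq=np.array([0.7, 0.5, 0.45])))
```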
pdfAn Empirical Study of Generative Models as Input Models for Simulation Zhou Miao (The Hong Kong Polytechnic University) and Zhiyuan Huang and Zhaolin Hu (Tongji University) Program Track: Simulation and Artificial Intelligence Program Tags: Input Modeling, Python Abstract AbstractInput modeling is pivotal for generating realistic data that mirrors real-world variables for simulation, yet traditional parametric methods often fail to capture complex dependencies. This study investigates the efficacy of modern generative models—such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), Normalizing Flows, and Diffusion Models—in addressing these challenges. Through systematic experiments on synthetic and real-world datasets, we evaluate their performance using metrics like Wasserstein distance and quantile loss. Our findings reveal that VAEs and Denoising Diffusion Probabilistic Models (DDPMs) consistently outperform other models, particularly in capturing nonlinear relationships, while GAN-based approaches exhibit instability. These results provide practical insights for practitioners, highlighting models that deliver reliable performance without extensive customization, and outline promising directions for future research in simulation input modeling. pdfAutomated Detection in Unstructured Material Streams: Challenges and Solutions Alexander Schlosser (Friedrich-Alexander-Universität Erlangen Nürnberg); Takeru Nemoto (SIEMENS AG); and Jonas Walter, Sebastian Amon, Joerg Franke, and Sebastian Reitelshöfer (Friedrich-Alexander-Universität Erlangen Nürnberg) Program Track: Simulation as Digital Twin Program Tag: Python Abstract AbstractPost-consumer packaging waste continues to rise, intensifying the need for automated sorting to increase recycling efficiency. Lightweight packaging (LWP), with its variable geometries, materials, and occlusions, remains especially difficult for conventional vision systems. Integrating You Only Look Once (YOLO) instance segmentation into robotic simulation platforms enables robust real-time detection. Experiments show that synthetic datasets yield consistently high segmentation accuracy, whereas real-world performance fluctuates. Crucially, increasing model size or resolution does not guarantee improvement; task-specific tuning and system-level integration are more effective. Simulation frameworks combining Unity, Robot Operating System 2 (ROS2), and MoveIt2 provide realistic evaluation and optimization. These findings demonstrate that AI-based segmentation and digital twins can deliver scalable, adaptive, and self-optimizing sorting systems, offering a practical pathway to sustainable material recovery and circular economy implementation. pdfAutomating Traffic Microsimulation from Synchro UTDF to SUMO Xiangyong Luo (ORNL); Yiran Zhang (University of Washington); Guanhao Xu, Wan Li, and Chieh Ross Wang (Oak Ridge National Laboratory); and Xuesong Simon Zhou (Arizona State University) Program Track: Reliability Modeling and Simulation Program Tags: DOE, Python Abstract AbstractModern transportation research relies on seamlessly integrating traffic signal data with robust network representation and simulation tools. This study presents utdf2gmns, an open-source Python tool that automates conversion of the Universal Traffic Data Format, including network representation, signalized intersections, and turning volumes, into the General Modeling Network Specification (GMNS) Standard.
The resulting GMNS-compliant network can be converted for microsimulation in SUMO. By automatically extracting intersection control parameters and aligning them with GMNS conventions, utdf2gmns minimizes manual preprocessing and data loss. utdf2gmns also integrates with the Sigma-X engine to extract and visualize key traffic control metrics, such as phasing diagrams, turning volumes, volume-to-capacity ratios, and control delays. This streamlined workflow enables efficient scenario testing, accurate model building, and consistent data management. Validated through case studies, utdf2gmns reliably models complex urban corridors, promoting reproducibility and standardization. Documentation is available on GitHub and PyPI, supporting easy integration and community engagement. pdfBeyond Co-authorship: Discovering Novel Collaborators With Multilayer Random-Walk-Based Simulation in Academic Networks Best Contributed Theoretical Paper - Finalist Siyu Chen, Keng Hou Leong, and Jiadong Liu (Tsinghua University); Wei Chen (Tsinghua University, Tencent Technology (Shenzhen) Co. LTD.); and Wai Kin Chan (Tsinghua University) Program Track: Data Science and Simulation Program Tags: Data Analytics, Python Abstract AbstractAcademic collaboration is vital for enhancing research impact and interdisciplinary exploration, yet finding suitable collaborators remains challenging. Conventional single-layer random walk methods often struggle with the heterogeneity of academic networks and limited recommendation novelty. To overcome these limitations, we propose a novel Multilayer Random Walk simulation framework (MLRW) that simulates scholarly interactions across cooperation, institutional affiliation, and conference attendance, enabling inter-layer transitions to capture multifaceted scholarly relationships. Tested on the large-scale SciSciNet dataset, our MLRW simulation framework significantly outperforms conventional random walk methods in accuracy and novelty, successfully identifying potential collaborators beyond immediate co-authorship. Our analysis further confirms the significance of institutional affiliation as a collaborative predictor, validating its inclusion. This research contributes a more comprehensive simulation approach to scholar recommendations, enhancing the discovery of latent practical collaborations. Future research will focus on integrating additional interaction dimensions and optimizing weighting strategies to further improve diversity and relevance. pdfBridging the Gap: A Practical Guide to Implementing Deep Reinforcement Learning Simulation in Operations Research with Gymnasium Konstantinos Ziliaskopoulos, Alexander Vinel, and Alice E. Smith (Auburn University) Program Track: Introductory Tutorials Program Tags: Neural Networks, Python, Supply Chain Abstract AbstractDeep Reinforcement Learning (DRL) has shown considerable promise in addressing complex sequential decision-making tasks across various fields, yet its integration within Operations Research (OR) remains limited despite clear methodological compatibility. This paper serves as a practical tutorial aimed at bridging this gap, specifically guiding simulation practitioners and researchers through the process of developing DRL environments using Python and the Gymnasium library. We outline the alignment between traditional simulation model components, such as state and action spaces, objective functions, and constraints, and their DRL counterparts. 
Using an inventory control scenario as an illustrative example, which is also available online through our GitHub repository, we detail the steps involved in designing, implementing, and integrating custom DRL environments with contemporary DRL algorithms. pdfClassical and AI-based Explainability of Ontologies on the Example of the Digital Reference – the Semantic Web for Semiconductor and Supply Chains Containing Semiconductors Marta Bonik (Infineon Technologies AG), Eleni Tsaousi (Harokopio University of Athens), Hans Ehm (Infineon Technologies AG), and George Dimitrakopoulos (Harokopio University of Athens) Program Track: MASM: Semiconductor Manufacturing Program Tags: Conceptual Modeling, Python, Supply Chain Abstract AbstractOntologies are essential for structuring knowledge in complex domains like semiconductor supply chains but often remain inaccessible to non-technical users. This paper introduces a combined classical and AI-based approach to improve ontology explainability, using Digital Reference (DR) as a case study. The first approach leverages classical ontology visualization tools, enabling interactive access and feedback for user engagement. The second integrates Neo4j graph databases and Python with a large language model (LLM)-based architecture, facilitating natural language querying of ontologies. A post-processing layer ensures reliable and accurate responses through query syntax validation, ontology schema verification, fallback templates, and entity filtering. The approach is evaluated with natural language queries, demonstrating enhanced usability, robustness, and adaptability. By bridging the gap between traditional query methods and AI-driven interfaces, this work promotes the broader adoption of ontology-driven systems in the Semantic Web and industrial applications, including semiconductor supply chains. pdfDigital Twin to Mitigate Adverse Addictive Gambling Behavior Felisa Vazquez-Abad and Jason Young (Hunter College CUNY) and Silvano A Bernabel (Graduate Center CUNY) Program Track: Simulation as Digital Twin Program Tags: Conceptual Modeling, Python Abstract AbstractThis work develops a simulation engine to create a digital twin that will monitor a gambler’s betting behavior when playing games. The digital twin is designed to perform simulations to compare outcomes of different betting strategies, under various assumptions on the psychological profile of the player. With these simulations it then produces recommendations to the player aimed at mitigating adverse outcomes. Our work focuses on efficient simulation and the creation of the corresponding GUI that will become the interface between the player and the digital twin. pdfDiscrete Event Simulation for Assessing the Impact of Bus Fleet Electrification on Service Reliability Best Contributed Applied Paper - Finalist Minjie Xia, Wenying Ji, and Jie Xu (George Mason University) Program Track: Project Management and Construction Program Tags: Complex Systems, Data Driven, Python, Resiliency Abstract AbstractThis paper aims to derive a simulation model to evaluate the impact of bus fleet electrification on service reliability. At its core, the model features a micro discrete event simulation (DES) of an urban bus network, integrating a route-level bus operation module and a stop-level passenger travel behavior module. 
Key reliability indicators—bus headway deviation ratio, excess passenger waiting time, and abandonment rate—are computed to assess how varying levels of electrification influence service reliability. A case study of route 35 operated by DASH in Alexandria, VA, USA is conducted to demonstrate the applicability and interpretability of the developed DES model. The results reveal trade-offs between bus fleet electrification and service reliability, highlighting the role of operational constraints and characteristics of electric buses (EBs). This research provides transit agencies with a data-driven tool for evaluating electrification strategies while maintaining reliable and passenger-centered service. pdfDistributionally Robust Logistic Regression with Missing Data Weicong Chen and Hoda Bidkhori (George Mason University) Program Track: Uncertainty Quantification and Robust Simulation Program Tags: Data Driven, Python Abstract AbstractMissing data presents a persistent challenge in machine learning. Conventional approaches often rely on data imputation followed by standard learning procedures, typically overlooking the uncertainty introduced by the imputation process. This paper introduces Imputation-based Distributionally Robust Logistic Regression (I-DRLR)—a novel framework that integrates data imputation with class-conditional Distributionally Robust Optimization (DRO) under the Wasserstein distance. I-DRLR explicitly models distributional ambiguity in the imputed data and seeks to minimize the worst-case logistic loss over the resulting uncertainty set. We derive a convex reformulation to enable tractable optimization and evaluate the method on the Breast Cancer and Heart Disease datasets from the UCI Repository. Experimental results demonstrate consistent improvements for out-of-sample performance in both prediction accuracy and ROC-AUC, outperforming traditional methods that treat imputed data as fully reliable. pdfEVIMAS - Digital Twin-Based Electric Vehicle Infrastructure Modeling And Analytics System Aparna Kishore, Kazi Ashik Islam, and Madhav Marathe (University of Virginia, Biocomplexity Institute) Program Track: Simulation as Digital Twin Program Tags: Complex Systems, Cyber-Physical Systems, Python Abstract AbstractThe growing shift to electric vehicles (EVs) presents significant challenges due to the complexities in spatial, temporal, and behavioral aspects of adoption and infrastructure development. To address these challenges, we present the EV Infrastructure Modeling and Analytics System (EVIMAS), a modular and extensible software system built using microservices principles. The system comprises three loosely coupled components: (i) a data processing pipeline that constructs a static digital model using diverse inputs, (ii) a modeling and simulation pipeline for simulating dynamic, multi-layered interactions, and (iii) an analytics pipeline that supports task execution and the analysis of results. We demonstrate the utility of the EVIMAS via three case studies. Our studies show that such analysis can be done efficiently under varying constraints and objectives, including geographic regions, analytical goals, and input configurations. EVIMAS supports fine-grained, agent-based EV simulations, facilitating the integration of new components, data, and models for EV infrastructure development. pdfEcho Warfare: The Strategic Appropriation of Intangible Heritage Sara Salem AlNabet (Multidimensional Warfare Training Center (MDIWTC).) 
Program Track: Military and National Security Applications Program Tag: Python Abstract AbstractThis paper introduces Echo Warfare, a novel non-kinetic doctrine that strategically targets intangible cultural heritage through systematic replication, appropriation, and institutional rebranding. Unlike traditional psychological or cognitive warfare, Echo Warfare aims for the intergenerational erosion of cultural identity and memory infrastructure. While current international legal frameworks inadequately address such systematic cultural appropriation, this paper advances discourse by implementing a Dynamic Bayesian Network (DBN) simulation to model how cultural anchors degrade under sustained echo feedback. The simulation reveals threshold effects, legitimacy dynamics, and population-level vulnerabilities, offering a predictive modeling tool for identifying intervention thresholds and patterns of vulnerability. The study concludes with legal, educational, and simulation-based policy recommendations to mitigate the long-term effects of Echo Warfare. pdfExpert-in-the-Loop Systems with Cross-Domain and In-Domain Few-Shot Learning for Software Vulnerability Detection David Thomas Farr (University of Washington), Kevin Talty (United States Army), Alexandra Farr (Microsoft), John Stockdale (U.S. Army), Iain Cruickshank (Carnegie Mellon University), and Jevin West (University of Washington) Program Track: Military and National Security Applications Program Tags: Cybersecurity, Python Abstract AbstractAs cyber threats become more sophisticated, rapid and accurate vulnerability detection is essential for maintaining secure systems. This study explores the use of Large Language Models in software vulnerability assessment by simulating the identification of Python code with known Common Weakness Enumerations (CWEs), comparing zero-shot, few-shot cross-domain, and few-shot in-domain prompting strategies. Our results indicate that few-shot prompting significantly enhances classification performance, particularly when integrated with confidence-based routing strategies that improve efficiency by directing human experts to cases where model uncertainty is high.
We find that LLMs can effectively generalize across vulnerability categories with minimal examples, suggesting their potential as scalable, adaptable cybersecurity tools in simulated environments. By integrating AI-driven approaches with expert-in-the-loop (EITL) decision-making, this work highlights a pathway toward more efficient and responsive cybersecurity workflows. Our findings provide a foundation for deploying AI-assisted vulnerability detection systems that enhance resilience while reducing the burden on human analysts. pdfFirescore: a Framework for Incident Risk Evaluation, Simulation, Coverage Optimization and Relocation Experiments Guido A.G. Legemaate (Fire Department Amsterdam-Amstelland, Safety Region Amsterdam-Amstelland); Joep van den Bogaert (Jheronimus Academy of Data Science,); Rob D. van der Mei (Centrum Wiskunde & Informatica); and Sandjai Bhulai (Vrije Universiteit Amsterdam) Program Track: Simulation as Digital Twin Program Tags: Data Driven, Python Abstract AbstractThis paper introduces fireSCore, an open source framework for incident risk evaluation, simulation, coverage optimization, and relocation experiments. As a digital twin of operational fire department logistics, its visualization frontend provides a live view of current coverage for the most common fire department units. Manually changing a unit status allows for a view into future coverage as it triggers an immediate recalculation of prognosed response times and coverage using the Open Source Routing Machine. The backend provides the controller and model, and implements various algorithms, e.g. a relocation algorithm that optimizes coverage during major incidents. The data broker handles communication with data sources and provides data for the front- and backend. An optional simulator adds an environment in which various scenarios, models and algorithms can be tested and aims to drive current and future organizational developments within the Dutch national fire service. pdfGNN-Heatmap Augmented Monte Carlo Tree Search for Cloud Workflow Scheduling Best Contributed Applied Paper - Finalist Dingyu Zhou, Jiaqi Huang, Yirui Zhang, and Wai Kin (Victor) Chan (Tsinghua University) Program Track: Simulation and Artificial Intelligence Program Tags: Monte Carlo, Neural Networks, Python Abstract AbstractThis paper addresses the NP-hard cloud workflow scheduling problem by proposing a novel method that integrates Graph Neural Networks with Monte Carlo Tree Search (MCTS). Cloud workflows, represented as Directed Acyclic Graphs, present significant scheduling challenges due to complex task dependencies and heterogeneous resource requirements. Our method leverages Anisotropic Graph Neural Networks to extract structural features from workflow and create a heatmap that guides the MCTS process during both the selection and simulation phases. Extensive experiments on workflows ranging from 30 to 110 tasks demonstrate that our method outperforms rule-based algorithms, classic MCTS, and other learning-based approaches; more notably, it achieves near-optimal solutions with only a 2.56% gap from exact solutions and demonstrates exceptional scalability to completely unseen workflow sizes. This synergistic integration of neural network patterns with Monte Carlo simulation-based search not only advances cloud workflow scheduling but also offers valuable insights for simulation-based optimization across diverse domains. 
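To make the heatmap-guided tree search idea above more tangible, the following is a minimal sketch of a PUCT-style selection rule in which a prior score (standing in for a GNN-produced heatmap value) biases which scheduling action a search node explores next. The class, the toy rollout, and the hypothetical task names are illustrative assumptions, not the authors' implementation, which also uses the heatmap during the simulation phase.

```python
import math
import random

class Node:
    """One task-ordering decision in a toy scheduling search tree."""
    def __init__(self, prior):
        self.prior = prior          # heatmap score for this child (assumed given)
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}          # action -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def puct_select(node, c_puct=1.5):
    """PUCT rule: exploit observed value, explore children with high prior and few visits."""
    total = sum(ch.visits for ch in node.children.values())
    def score(ch):
        return ch.value() + c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visits)
    return max(node.children.items(), key=lambda kv: score(kv[1]))

def run_toy_search(priors, n_iters=200):
    """Expand a one-level tree and back up random rollout rewards (placeholder simulation)."""
    root = Node(prior=1.0)
    root.children = {a: Node(p) for a, p in priors.items()}
    for _ in range(n_iters):
        action, child = puct_select(root)
        reward = random.random() * priors[action]   # stand-in for a scheduling rollout
        child.visits += 1
        child.value_sum += reward
        root.visits += 1
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

if __name__ == "__main__":
    heatmap = {"taskA": 0.7, "taskB": 0.2, "taskC": 0.1}   # hypothetical GNN heatmap scores
    print("selected next task:", run_toy_search(heatmap))
```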
pdfGenVision: Enhancing Construction Safety Monitoring with Synthetic Image Generation Jiuyi Xu (Colorado School of Mines), Meida Chen (USC Institute for Creative Technologies), and Yangming Shi (Colorado School of Mines) Program Track: Project Management and Construction Program Tags: Neural Networks, Python Abstract AbstractThe development of object detection models for construction safety is often limited by the availability of high-quality, annotated datasets. This study explores the use of synthetic images generated by DALL·E 3 to supplement or partially replace real data in training YOLOv8 for detecting construction-related objects. We compare three dataset configurations: real-only, synthetic-only, and a mixed set of real and synthetic images. Experimental results show that the mixed dataset consistently outperforms the other two across all evaluation metrics, including precision, recall, IoU, and mAP@0.5. Notably, detection performance for occluded or ambiguous objects such as safety helmets and vests improves with synthetic data augmentation. While the synthetic-only model shows reasonable accuracy, domain differences limit its effectiveness when used alone. These findings suggest that high-quality synthetic data can reduce reliance on real-world data and enhance model generalization, offering a scalable approach for improving construction site safety monitoring systems. pdfGenerating Artificial Electricity Data For Monitoring Energy Consumption In Smart Cities Sina Pahlavan, Wael Shabana, and Abdolreza Abhari (Toronto Metropolitan University) Program Track: Data Science and Simulation Program Tag: Python Abstract AbstractGeneration of synthetic data for energy demand allows simulation-based forecasting for infrastructure
planning, building optimization, and energy management, key elements of smart cities. This study compares multivariate kernel density estimation (KDE) and time-series generative adversarial networks (TimeGAN) for their ability to generate realistic time series that preserve crucial feature relationships for forecasting. The evaluation is based on both statistical similarity and predictive performance using machine learning models, focusing on seasonal and hourly consumption patterns. The results emphasize the importance of temporal consistency and justify synthetic augmentation when real data is limited, especially for time-aware energy forecasting tasks, and demonstrate how synthesized data can be used when forecasting future energy demand.
pdfGraph-Based Reinforcement Learning for Dynamic Photolithography Scheduling Sang-Hyun Cho, Sohyun Jeong, and Jimin Park (Korea Advanced Institute of Science and Technology); Boyoon Choi and Paul Han (Samsung Display); and Hyun-Jung Kim (Korea Advanced Institute of Science and Technology) Program Track: MASM: Semiconductor Manufacturing Program Tags: Neural Networks, Python Abstract AbstractThis paper addresses the photolithography process scheduling problem, a critical bottleneck in both display and semiconductor production. In display manufacturing, as the number of deposited layers increases and reentrant operations become more frequent, the complexity of scheduling processes has significantly increased. Additionally, growing market demand for diverse product types underscores the critical need for efficient scheduling to enhance operational efficiency and meet due dates. To address these challenges, we propose a novel graph-based reinforcement learning framework that dynamically schedules photolithography operations in real time, explicitly considering mask locations, machine statuses, and associated transfer times. Through numerical experiments, we demonstrate that our method achieves consistent and robust performance across various scenarios, making it a practical solution for real-world manufacturing systems. pdfHierarchical Population Synthesis Using a Neural-Differentiable Programming Approach Imran Mahmood Q. Hashmi, Anisoara Calinescu, and Michael Wooldridge (University of Oxford) Program Track: Agent-based Simulation Program Tags: Complex Systems, Neural Networks, Open Source, Python Abstract AbstractAdvances in Artificial Intelligence have enabled more accurate and scalable modelling of complex social systems, which depend on realistic high-resolution population data. We introduce a novel methodology for generating hierarchical synthetic populations using differentiable programming, producing detailed demographic structures essential for simulation and analysis. Existing approaches struggle to model hierarchical population structures and optimise over discrete demographic attributes. Leveraging feed-forward neural networks and Gumbel-Softmax encoding, our approach transforms aggregated census and survey data into continuous, differentiable forms, enabling gradient-based optimisation to match target demographics with high fidelity. The framework captures multi-scale population structures, including household composition and socio-economic diversity, with verification via logical rules and validation against census cross tables. A UK case study shows our model closely replicates real-world distributions. This scalable approach provides simulation modellers and analysts with high-fidelity synthetic populations as input for agent-based simulations of complex societal systems, enabling behaviour simulation, intervention evaluation, and demographic analysis. pdfInfluence of Norms in Alliance Characteristics of Humanitarian Food Agencies: Capability, Compatibility and Satisfaction Naimur Rahman Chowdhury and Rashik Intisar Siddiquee (North Carolina State University) and Julie Simmons Ivy (University of Michigan) Program Track: Environment, Sustainability, and Resilience Program Tags: Conceptual Modeling, Python, Resiliency Abstract AbstractHunger relief networks consist of agencies that work as independent partners within a food bank network. For these networks to effectively and efficiently reduce food insecurity, strategic alliances between agencies are crucial.
Agency preference for forming alliances with other agencies can impact network structure and network satisfaction. In this paper, we explore the compatibility and satisfaction achieved by alliances between different agencies. We introduce two agency norms: conservative and diversifying. We develop an agent-based simulation model to investigate alliance formation in a network. We evaluate network satisfaction, satisfaction among different types of agencies, and alliance heterogeneity. We test the statistical significance of satisfaction within a norm and between norms for different agencies. Findings reveal that the ‘diversifying’ norm in the network reduces gaps in satisfaction between strong and weak agencies, ensuring fairness for weaker agencies in the network, whereas the ‘conservative’ norm favors moderate agencies in the network. pdfIntegrating Expert Trustworthiness into Digital Twin Models Extracted from Expert Knowledge and Internet of Things Data: A Case Study in Reliability Michelle Jungmann (Karlsruhe Institute of Technology) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, University of Southern Denmark) Program Track: Simulation as Digital Twin Program Tags: Petri Nets, Process Mining, Python Abstract AbstractThe extraction of Digital Twin models from both expert knowledge and Internet of Things data remains an underexplored area, with existing approaches typically being highly customized. Expert knowledge, provided by human experts, is influenced by individual experience, contextual understanding and domain-specific knowledge, leading to varying levels of uncertainty and trustworthiness. In this paper, we address the identified research gap by extending our previous work and introducing a novel approach that models and integrates expert trustworthiness into the extraction of what we term data-knowledge fused Digital Twin models. Key features of the approach are: quantifications of expert trustworthiness and algorithms for selecting and integrating knowledge into model extractions based on trustworthiness. We demonstrate our approach for quantifying and incorporating trustworthiness levels in a reliability modeling case study. pdfModeling and Simulation of Surgical Procedures with an Application to Laparoscopic Cholycystectomy Yiyu Wang and Vincent Augusto (Ecole des Mines de Saint-Etienne), Canan Pehlivan (IMT Mines Albi), Julia Fleck (Ecole des Mines de Saint-Etienne), and Nesrine Mekhenane (Chaire Bopa) Program Track: Data Science and Simulation Program Tags: Process Mining, Python Abstract AbstractSurgeons’ actions are key to surgical success. Our objective is to develop a decision-support tool to help prioritize patient safety and reduce risks during surgery. We propose a structured mathematical framework that defines key components of a surgical procedure, making it adaptable to various types of surgeries. Using the CholecT50 dataset, we generate and pre-process event logs to construct a process map that models the surgical workflow through Process Mining techniques. This process map provides insights into procedural patterns and can be visualized at different levels of granularity to align with surgeons’ needs. To validate its effectiveness, we simulate synthetic surgeries and assess the process map’s performance in
replicating real surgical workflows. By demonstrating the generalizability of our approach, this work paves the way for the development of an advanced decision-support tool that can assist surgeons in real-time decision-making and post-operative analysis. pdfModular Python Library for Simulations of Semiconductor Assembly and Test Process Equipment Robert Dodge (Arizona State University), Zachary Eyde (Intel Corporation), and Giulia Pedrielli (Arizona State University) Program Track: MASM: Semiconductor Manufacturing Program Tag: Python Abstract AbstractIncreasing global demand has led to calls for better methods of process improvement for semiconductor wafer manufacturing. Of these methods, digital twins have emerged as a natural extension of already existing simulation techniques. We argue that despite their extensive use in literature, the current tools used to construct semiconductor simulations are underdeveloped. Without a standardized tool to build these simulations, their modularity and capacity for growth are heavily limited. In this paper, we propose and implement a library of classes in the Python language designed to build on top of the already existing SimPy library. These classes are designed to automatically handle specific common logical features of semiconductor burn-in processes. This design allows users to easily create modular, adaptable, digital twin-ready simulations. Preliminary results demonstrate the library’s efficacy in predicting against benchmark data provided by the Intel Corporation and encourage further development. pdfMulti-agent Market Simulation for Deep Reinforcement Learning With High-Frequency Historical Order Streams David Byrd (Bowdoin College) Program Track: Simulation and Artificial Intelligence Program Tags: Complex Systems, Open Source, Python Abstract AbstractAs artificial intelligence rapidly co-evolves with complex modern systems, new simulation frameworks are needed to explore the potential impacts. In this article, I introduce a novel open source multi-agent financial market simulation powered by raw historical order streams at nanosecond resolution. The simulation is particularly targeted at deep reinforcement learning, but also includes momentum, noise, order book imbalance, and value traders, any number and type of which may simultaneously trade against one another and the historical order stream within the limit order books of the simulated exchange. The simulation includes variable message latency, automatic agent computation delays sampled in real time, and built-in tools for performance logging, statistical analysis, and plotting. I present the simulation features and design, demonstrate the framework on a multipart DeepRL use case with continuous actions and observations, and discuss potential future work. pdfMulti-fidelity Simulation Framework for the Strategic Pooling of Surgical Assets Sean Shao Wei Lam (Singapore Health Services, Duke NUS Medical School); Boon Yew Ang (Singapore Health Services); Marcus Eng Hock Ong (Singapore Health Services, Duke NUS Medical School); and Hiang Khoon Tan (Singapore General Hospital, SingHealth Duke-NUS Academic Medicine Centre) Program Track: Healthcare and Life Sciences Program Tags: Neural Networks, Python Abstract AbstractThis study describes a multi-fidelity simulation framework integrating a high-fidelity discrete event simulation (DES) model with a machine learning (ML)-based low-fidelity model to optimize operating theatre (OT) scheduling in a major public hospital in Singapore. 
The high-fidelity DES model is trained and validated with real-world data and the low-fidelity model is trained and validated with synthetic data derived from simulation runs with the DES model. The high-fidelity model captures system complexities and uncertainties while the low-fidelity model facilitates policy optimization via the multi-objective non-dominated sorting genetic algorithm (NSGA-II). The optimization algorithm can identify Pareto-optimal policies under varying open access (OA) periods and strategies. Pareto optimal policies are derived across the dual objectives in maximizing OT utilization (OTU) and minimizing waiting time to surgery (WTS). These policies support post-hoc evaluation within an integrated decision support system (DSS). pdfOptimization of Operations in Solid Bulk Port Terminals using Digital Twin JACKELINE DEL CARMEN HUACCHA NEYRA and Lorrany Cristina da Silva (GENOA), João Ferreira Netto (University of Sao Paulo), and Afonso Celso Medina (GENOA) Program Track: Simulation as Digital Twin Program Tags: Conceptual Modeling, Python Abstract AbstractThis article presents the development of a Digital Twin (DT)-based tool for optimizing scheduling in solid bulk export port terminals. The approach integrates agent-based simulation with the Ant Colony System (ACS) metaheuristic to efficiently plan railway unloading, stockyard storage, and maritime shipping. The model interacts with operational data, anticipating issues and aiding decision-making. Validation was performed using real data from a port terminal in Brazil, yielding compatible results and reducing port stay duration. Tests were based on a Baseline Scenario, aligned with a mineral export terminal, for ACS parameter calibration, along with three additional scenarios: direct shipment, preventive maintenance, and a simultaneous route from stockyard to ships. The study highlights DT’s potential to modernize port operations, offering practical support in large-scale logistics environments. pdfPENM: A Parametric Evolutionary Network Model for Scholar Collaboration Network Simulation Jiadong Liu, Keqin Guan, and Siyu Chen (Tsinghua University); Wei Chen (Tsinghua University, Tencent Technology (Shenzhen) Co. LTD.); and Wai Kin Victor Chan (Tsinghua University) Program Track: Data Science and Simulation Program Tag: Python Abstract AbstractIdentifying suitable collaborators has become an important challenge in research management, where insights into the structure and evolution of scholar collaboration networks are essential. However, existing studies often adopt static or locally dynamic views, limiting their ability to capture long-term network evolution. To address these issues, this paper introduces PENM (Parametric Evolutionary Network Model), a simulation framework designed to model the evolution of scholar collaboration networks through parametric mechanisms. The PENM simulates node and edge evolution through probabilistic rules and tunable parameters, reflecting realistic academic behaviors like cumulative collaboration and co-author expansion. We provide a theoretical analysis of the model's growth patterns under varying parameters and verify these findings through simulation. Evaluations on real-world datasets demonstrate that PENM evolves networks with degree distributions closely aligned with actual scholar networks. PENM offers a versatile simulation-based approach for modeling academic collaboration dynamics, enabling applications and simulation of future academic ecosystems. 
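As a rough illustration of the kind of parametric, probabilistic growth rules an evolutionary collaboration-network model relies on, the sketch below grows a toy co-authorship graph by degree-proportional attachment with a tunable team-size parameter. The rule and parameter names are assumptions chosen for brevity, not PENM's actual mechanisms.

```python
import random
from collections import defaultdict

def grow_collaboration_network(n_steps, team_size=3, seed=0):
    """Toy parametric growth: each step adds one scholar who co-authors with
    `team_size` existing scholars chosen with probability proportional to degree
    (a preferential-attachment stand-in for cumulative collaboration)."""
    rng = random.Random(seed)
    degree = defaultdict(int)
    edges = []
    # start from a small fully connected seed team
    for i in range(team_size):
        for j in range(i + 1, team_size):
            edges.append((i, j)); degree[i] += 1; degree[j] += 1
    for new in range(team_size, team_size + n_steps):
        nodes, weights = zip(*degree.items())
        partners = set()
        while len(partners) < min(team_size, len(nodes)):
            partners.add(rng.choices(nodes, weights=weights)[0])
        for p in partners:
            edges.append((new, p)); degree[new] += 1; degree[p] += 1
    return edges, degree

if __name__ == "__main__":
    _, deg = grow_collaboration_network(n_steps=500)
    print("five largest degrees (heavy tail expected):", sorted(deg.values(), reverse=True)[:5])
```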
pdfProbabilistic Isochrone Analysis in Military Ground Movement: Multi-Method Synergy for Adaptive Models of the Future Alexander Roman and Oliver Rose (Universität der Bundeswehr München) Program Track: Military and National Security Applications Program Tags: Data Analytics, Python Abstract AbstractTimely and accurate prediction of adversarial unit movements is a critical capability in military operations, yet traditional methods often lack the granularity or adaptability to deal with sparse, uncertain data. This paper presents a probabilistic isochrone (PI) framework to estimate future positions of military units based on sparse reconnaissance reports. The approach constructs a continuous probability density function of movement distances and derives gradient prediction areas. Validation is conducted using real-world data from the 2022 Russian invasion of Ukraine, evaluating both the inclusion of actual future positions within the predicted rings and the root mean-squared error of our method. Results show that the method yields reliable spatial uncertainty bounds and offers interpretable predictive insights. This PI approach complements existing isochrone mapping and adversarial modeling systems and demonstrates a novel fusion of simulation, spatial analytics, and uncertainty quantification in military decision support. Future work will integrate simulation to enhance predictive fidelity. pdfPySIRTEM: An Efficient Modular Simulation Platform For The Analysis Of Pandemic Scenarios Preetom Kumar Biswas, Giulia Pedrielli, and K. Selçuk Candan (Arizona State University) Program Track: Modeling Methodology Program Tags: DOE, Monte Carlo, Python Abstract AbstractConventional population-based ODE models struggle with increased levels of resolution since incorporating many states exponentially increases computational costs, and demands robust calibration for numerous hyperparameters. PySIRTEM is a spatiotemporal SEIR-based epidemic simulation platform that provides high resolution analysis of viral disease progression and mitigation. Based on the authors’ previously developed Matlab© simulator SIRTEM, PySIRTEM’s modular design reflects key health processes, including infection, testing, immunity, and hospitalization, enabling flexible manipulation of transition rates. Unlike SIRTEM, PySIRTEM uses a Sequential Monte Carlo (SMC) particle filter to dynamically learn epidemiological parameters using historical COVID-19 data from several U.S. states. The improved accuracy (by orders of magnitude) makes PySIRTEM ideal for informed decision-making by detecting outbreaks and fluctuations. We further demonstrate PySIRTEM’s usability by performing a factorial analysis to assess the impact of different hyperparameter configurations on the predicted epidemic dynamics. Finally, we analyze containment scenarios with varying trends, showcasing PySIRTEM’s adaptability and effectiveness. pdfSales Planning Using Data Farming in Trading Networks Joachim Hunker (Fraunhofer Institute for Software and Systems Engineering ISST) and Alexander Wuttke, Markus Rabe, Hendrik van der Valk, and Mario di Benedetto (TU Dortmund University) Program Track: Logistics, Supply Chain Management, Transportation Program Tags: AnyLogic, Data Analytics, Python Abstract AbstractVolatile customer demand poses a significant challenge for the logistics networks of trading companies. To mitigate the uncertainty in future customer demand, many products are produced to stock with the goal to be able to meet the customers’ expectations. To adequately manage their product inventory, demand forecasting is a major concern in the companies’ sales planning. A promising approach besides using observational data as an input for the forecasting methods is simulation-based data generation, called data farming. In this paper, purposeful data generation and large-scale experiments are applied to generate input data for predicting customer demand in sales planning of a trading company. An approach is presented for using data farming in combination with established forecasting methods such as random forests. The application is discussed on a real-world use case, highlighting benefits of the chosen approach, and providing useful and value-adding insights to motivate further research. pdfScalable, Rule-Based, Parallel Discrete Event Based Agentic AI Simulations Atanu Barai, Stephan Eidenbenz, and Nandakishore Santhi (Los Alamos National Laboratory) Program Track: Simulation and Artificial Intelligence Program Tags: Parallel, Python Abstract AbstractWe introduce a novel parallel discrete event simulation (PDES)-based method to couple multiple AI and non-AI agents in a rule-based manner with dynamic constraints while ensuring correctness of output. Our coupling mechanism enables the agents to work in a co-operative environment towards a common goal while many sub-tasks run in parallel. AI agents trained on vast amounts of human data naturally model complex human behaviors and emotions easily – this is in contrast to conventional agents which need to be burdensomely complex to capture aspects of human behavior. Distributing smaller AI agents on a large heterogeneous CPU/GPU cluster enables extremely scalable simulation tapping into the collective complexity of individual smaller models, while circumventing local memory bottlenecks for model parameters. We illustrate the potential of our approach with examples from traffic simulation and robot gathering, where we find our
coupling of AI/non-AI agents improves overall fidelity. pdfSimulating the dynamic interaction between fleet performance and maintenance processes based on Remaining Useful Life Christoph Werner (Minitab) Program Track: Reliability Modeling and Simulation Program Tags: Python, SIMUL8 Abstract AbstractFleet planning is often challenging. Especially, the dynamic interaction with maintenance entails various uncertainties. Predicting the arrivals into maintenance processes requires an understanding of fleet performance over time while, in turn, delays of repairs severely impact fleet performance and deterioration.
This feedback loop has been neglected so far, which is why we present a novel framework using a ‘rolling window’ machine learning model to predict the inputs into a discrete event simulation (DES) of repair activities based on remaining useful life (RUL). Our ‘fleet tracker’ then uses the DES outputs to simulate fleet performance together with environmental and mission-based factors which form the inputs for predicting RUL. Finally, explainable ML helps decision-makers construct relevant ‘what if’ scenarios. As a motivating example, we consider helicopters in search and rescue missions and their maintenance. As a key result, we compare two scenarios of repair turnaround times and their impact on RUL decline. pdfSpillover-Aware Simulation Analysis for Policy Evaluation in Epidemic Networks JINGYUAN CHOU, Jiangzhuo Chen, and Madhav Marathe (University of Virginia) Program Track: Simulation and Artificial Intelligence Program Tag: Python Abstract AbstractSimulations are widely used to evaluate public health interventions, yet they often fail to quantify how interventions in one region indirectly affect others—a phenomenon known as spillover. This omission can lead to incorrect policy evaluations and misattributed effects. We propose a post-simulation framework for estimating causal spillover in spatial epidemic networks. Our method introduces a directional graph neural network (Dir-GNN) estimator that learns homophily-aware representations and estimates counterfactual outcomes under hypothetical neighbor treatments. Applied to a semi-synthetic setup built on PatchSim—a metapopulation SEIR simulator with realistic inter-county mobility—our estimator recovers spillover effects and corrects attribution errors inherent in standard evaluation. Experiments show that accounting for spillover improves treatment estimation and policy reliability. pdfSupply Chain Optimization via Generative Simulation and Iterative Decision Policies Haoyue Bai (Arizona State University); Haoyu Wang (NEC Labs America.); Nanxu Gong, Xinyuan Wang, and Wangyang Ying (Arizona State University); Haifeng Chen (NEC Labs America.); and Yanjie Fu (Arizona State University) Program Track: Data Science and Simulation Program Tags: Data Driven, Python, Supply Chain Abstract AbstractHigh responsiveness and economic efficiency are critical objectives in supply chain transportation, both of which are influenced by strategic decisions on shipping mode. An integrated framework combining an efficient simulator with an intelligent decision-making algorithm can provide an observable, low-risk environment for transportation strategy design. An ideal simulation-decision framework must (1) generalize effectively across various settings, (2) reflect fine-grained transportation dynamics, (3) integrate historical experience with predictive insights, and (4) maintain tight integration between simulation feedback and policy refinement. We propose Sim-to-Dec framework to satisfy these requirements. Specifically, Sim-to-Dec consists of a generative simulation module, which leverages autoregressive modeling to simulate continuous state changes, reducing dependence on handcrafted domain-specific rules and enhancing robustness against data fluctuations; and a history–future dual-aware decision model, refined iteratively through end-to-end optimization with simulator interactions. Extensive experiments conducted on three real-world datasets demonstrate that Sim-to-Dec significantly improves timely delivery rates and profit. 
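The simulate-then-decide loop behind frameworks like the one above can be sketched very simply: a generative simulator rolls transportation dynamics forward, candidate policies are scored on the simulated trajectories, and the best-scoring policy is retained. The AR(1) transit-time dynamics, the cost figures, and the threshold shipping rule below are placeholder assumptions for illustration, not the Sim-to-Dec models.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_transit_times(n_days, phi=0.6, mu=5.0, sigma=1.0):
    """Placeholder generative simulator: AR(1) ground transit times (in days)."""
    t = np.empty(n_days)
    t[0] = mu
    for d in range(1, n_days):
        t[d] = mu + phi * (t[d - 1] - mu) + rng.normal(0.0, sigma)
    return np.clip(t, 1.0, None)

def evaluate_threshold(threshold, transit, air_days=2.0, air_cost=300.0,
                       ground_cost=100.0, late_penalty=80.0, due=6.0):
    """Average cost of shipping by air whenever forecast ground time exceeds the threshold."""
    cost = 0.0
    for t in transit:
        if t > threshold:                     # decision: expedite by air
            cost += air_cost + late_penalty * max(0.0, air_days - due)
        else:                                 # decision: keep the cheaper ground mode
            cost += ground_cost + late_penalty * max(0.0, t - due)
    return cost / len(transit)

if __name__ == "__main__":
    # iterative refinement: simulate, score candidate policies, keep the best one
    sim = simulate_transit_times(2000)
    candidates = np.linspace(3.0, 9.0, 25)
    best = min(candidates, key=lambda th: evaluate_threshold(th, sim))
    print(f"tuned air-shipping threshold: {best:.2f} days")
```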
pdfSupporting Strategic Healthcare Decisions With Simulation: A Digital Twin For Redesigning Traumatology Services Marta Cildoz and Miguel Baigorri (Public University of Navarre), Isabel Rodrigo-Rincón (University Hospital of Navarre), and Fermin Mallor (Public University of Navarre) Program Track: Healthcare and Life Sciences Program Tag: Python Abstract AbstractReducing waiting times in specialized healthcare has become a pressing concern in many countries, particularly in high-demand services such as traumatology. This study introduces a simulation-based approach to support strategic decision-making for redesigning the referral interface between Primary Care and specialized care, as well as reorganizing internal pathways in the Traumatology Service of the University Hospital of Navarre (Spain). A discrete-event simulation model, developed using real patient data and designed to capture the system’s transient behavior from its current state, is employed to evaluate the effects of these changes on key performance indicators such as number of consultations per patient, physician workload, and waiting list reduction. The model also evaluates how different referral behaviors among Primary Care physicians influence system performance. Results demonstrate the model’s capacity to provide evidence-based guidance for strategic healthcare decisions and highlight its potential to evolve into a digital twin for continuous improvement and operational planning. pdfTESO: Tabu‐Enhanced Simulation Optimization for Noisy Black-Box Problems Bulent Soykan, Sean Mondesire, and Ghaith Rabadi (University of Central Florida) Program Track: Simulation Optimization Program Tag: Python Abstract AbstractSimulation optimization (SO) is frequently challenged by noisy evaluations, high computational costs, and complex, multimodal search landscapes. This paper introduces Tabu-Enhanced Simulation Optimization (TESO), a novel metaheuristic framework integrating adaptive search with memory-based strategies. TESO leverages a short-term Tabu List to prevent cycling and encourage diversification, and a long-term Elite Memory to guide intensification by perturbing high-performing solutions. An aspiration criterion allows overriding tabu restrictions for exceptional candidates. This combination facilitates a dynamic balance between exploration and exploitation in stochastic environments. We demonstrate TESO's effectiveness and reliability using a queue optimization problem, showing improved performance compared to benchmarks and validating the contribution of its memory components. pdfTowards Process Optimization by Leveraging Relationships between Electrical Wafer Sorting and Complete-line Statistical Process Control Data Dmitrii Fomin (IMT-Atlantique); Andres Torres (Siemens); Valeria Borodin (IMT-Atlantique); Anastasiia Doinychko (Siemens); David Lemoine (IMT-Atlantique); Agnès Roussy (Mines Saint-Etienne, CNRS, UMR 6158 LIMOS); and Daniele Pagano, Marco Stefano Scroppo, Gabriele Tochino, and Daniele Vinciguerra (STMicroelectronics) Program Track: MASM: Semiconductor Manufacturing Program Tag: Python Abstract AbstractIn semiconductor manufacturing, Statistical Process Control (SPC) ensures that products meet the Electrical Wafer Sort (EWS) tests performed at the end of the manufacturing flow. In this work, we model the EWS tests for several products using inline SPC data from the Front-End-Of-Line (FEOL) to the Back-End-Of-Line (BEOL).
SPC data tend to be inherently sparse because measuring all wafers, lots, and products is both costly and can significantly impact the throughput. In contrast, EWS data is densely collected at the die level, offering high granularity. We propose to model the problem as a regression task to uncover interdependencies between SPC and EWS data at the lot level. By applying two learning strategies, mono- and multi-target, we demonstrate empirically that leveraging families of EWS tests enhances model performance. The performance and practical relevance of the approach are validated through numerical experiments on real-world industrial data. pdfVirtual Commissioning of AI Vision Systems for Human-robot Collaboration Using Digital Twins and Deep Learning Urfi Khan (Oakland University), Adnan Khan (Institute of Innovation in Technology and Management), and Ali Ahmad Malik (Oakland University) Program Track: Simulation as Digital Twin Program Tag: Python Abstract AbstractVirtual commissioning is the process of validating the design and control logic of a physical system prior to its physical deployment. Machine vision systems are an integral part of automated systems, particularly in perception-driven tasks; however, the complexity of accurately modeling these systems and their interaction with dynamic environments makes their verification in virtual settings a significant challenge. This paper presents an approach for the virtual commissioning of AI-based vision systems which can be useful to evaluate the safety and reliability of human-robot collaborative cells using virtual cameras before physical deployment. A digital twin of a collaborative workcell was developed in Tecnomatix Process Simulate, including a virtual camera that generated synthetic image data. A deep learning model was trained on this synthetic data and subsequently validated using real-world data from physical cameras in an actual human-robot collaborative environment. pdfiHeap: Generalized heap module with case studies in Marketing and Emergency Room Services Aniruddha Mukherjee and Vernon Rego (Purdue University) Program Track: Modeling Methodology Program Tag: Python Abstract AbstractWe introduce a novel iheap module, a flexible Python library for heap operations. The iheap module introduces a generalized comparator function, making it more flexible and general compared to the heapq that is commonly used. We demonstrate that the iheap module achieves parity in terms of time complexity and memory usage against established standard heap modules in Python. Furthermore, the iheap module provides advanced methods and customization options unavailable in its counterparts, which enable the user to implement the heap operations with greater flexibility and control. We demonstrate the iheap module through two case studies. The first case study focuses on the efficient allocation of funds across marketing campaigns with uncertain returns. The second case study focuses on patient triaging and scheduling in the emergency rooms of hospitals. The iheap module provides a powerful and easy-to-use tool for heap operations commonly used in simulation studies. pdf
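Python's built-in heapq is min-only and keyless, which is the gap a generalized-comparator heap fills. The wrapper below shows one common standard-library way to obtain that behavior by storing (key(item), counter, item) tuples; the class name and API are illustrative assumptions and not the iheap module's actual interface, though the usage mirrors the abstract's emergency-room triage case study.

```python
import heapq
import itertools

class KeyedHeap:
    """Minimal comparator-style heap built on heapq: pops the item with the
    smallest key(item); a counter breaks ties so items themselves are never compared."""
    def __init__(self, key=lambda x: x):
        self._key = key
        self._heap = []
        self._count = itertools.count()

    def push(self, item):
        heapq.heappush(self._heap, (self._key(item), next(self._count), item))

    def pop(self):
        return heapq.heappop(self._heap)[2]

    def __len__(self):
        return len(self._heap)

if __name__ == "__main__":
    # triage-style usage: most urgent (highest severity) patient first
    er = KeyedHeap(key=lambda patient: -patient["severity"])
    er.push({"name": "A", "severity": 2})
    er.push({"name": "B", "severity": 5})
    er.push({"name": "C", "severity": 3})
    while er:
        print(er.pop()["name"])   # prints B, C, A
```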
Evaluating the Transferability of a Synthetic Population Generation Approach for Public Health Applications Emma Von Hoene (George Mason University); Aanya Gupta (Thomas Jefferson High School for Science and Technology); and Hamdi Kavak, Amira Roess, and Taylor Anderson (George Mason University) Program Track: Data Science and Simulation Program Tags: Data Driven, R Abstract AbstractSimulations are valuable in public health research, with synthetic populations enabling realistic policy analysis. However, methods for generating synthetic populations with domain-specific characteristics remain underexplored. To address this, we previously introduced a population synthesis approach that directly integrates health surveys. This study evaluates its transferability across health outcomes, locations, and timeframes through three case studies. The first generates a Virginia population (2021) with COVID-
19 vaccine intention, comparing results to probabilistic and regression-based approaches. The second synthesizes populations with depression (2021) for Virginia, Tennessee, and New Jersey. The third constructs Virginia populations with smoking behaviors for 2021 and 2022. Results demonstrate the method’s transferability for various health applications, with validation confirming its ability to capture accuracy, statistical relationships, and spatial heterogeneity. These findings enhance population synthesis for public health simulations and offer new datasets with small-area estimates for health outcomes, ultimately supporting public health decision-making. pdfExploiting Functional Data for Combat Simulation Sensitivity Analysis Lisa Joanne Blumson, Charlie Peter, and Andrew Gill (Defence Science and Technology Group) Program Track: Analysis Methodology Program Tags: Data Analytics, DOE, Metamodeling, Output Analysis, R Abstract AbstractComputationally expensive combat simulations are often used to inform military decision-making and sensitivity analyses enable the quantification of the effect of military capabilities or tactics on combat mission effectiveness. The sensitivity analysis is performed using a meta-model approximating the simulation's input-output relationship and the output data that most combat meta-models are fitted to correspond to end-of-run mission effectiveness measures. However during execution, a simulation records a large array of temporal data. This paper seeks to examine whether functional combat meta-models fitted to this temporal data, and the subsequent sensitivity analysis, could provide a richer characterization of the effect of military capabilities or tactics. An approach from Functional Data Analysis will be used to illustrate the potential benefits on a case study involving a closed-loop, stochastic land combat simulation. pdfUsing Adaptive Basis Search Method In Quasi-Regression To Interpret Black-Box Models Ambrose Emmett-Iwaniw and Christiane Lemieux (University of Waterloo) Program Track: Analysis Methodology Program Tags: Monte Carlo, R, Variance Reduction Abstract AbstractQuasi-Regression (QR) is an inference method that approximates a function of interest (e.g., black-box model) for interpretation purposes by a linear combination of orthonormal basis functions of $L^2[0,1]^{d}$. The coefficients are integrals that do not have an analytical solution and therefore must be estimated, using Monte Carlo or Randomized Quasi-Monte Carlo (RQMC). The QR method can be time-consuming if the number of basis functions is large. If the function of interest is sparse, many of these basis functions are irrelevant and could thus be removed, but they need to be correctly identified first. We address this challenge by proposing new adaptive basis search methods based on the RQMC method that adaptively select important basis functions. These methods are shown to be much faster than previously proposed QR methods and are overall more efficient. pdf
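The mechanics of quasi-regression are easy to state: approximate the black-box function f by a truncated expansion f(u) ≈ Σ_k β_k φ_k(u), where the coefficients β_k = ∫ f(u) φ_k(u) du are estimated by (quasi-)Monte Carlo averages. The sketch below uses plain Monte Carlo, one dimension, and an orthonormal cosine basis of L2[0,1]; the basis choice and test function are illustrative, and the adaptive RQMC-based basis search proposed in the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def phi(k, u):
    """Orthonormal cosine basis of L2[0,1]: phi_0 = 1, phi_k = sqrt(2) * cos(k*pi*u)."""
    return np.ones_like(u) if k == 0 else np.sqrt(2.0) * np.cos(k * np.pi * u)

def quasi_regression(f, n_basis=8, n_samples=50_000):
    """Estimate beta_k = E[f(U) * phi_k(U)], U ~ Uniform(0,1), by plain Monte Carlo."""
    u = rng.random(n_samples)
    fu = f(u)
    return np.array([np.mean(fu * phi(k, u)) for k in range(n_basis)])

def reconstruct(beta, u):
    """Evaluate the truncated expansion sum_k beta_k * phi_k(u)."""
    return sum(b * phi(k, u) for k, b in enumerate(beta))

if __name__ == "__main__":
    f = lambda u: np.exp(u) * np.sin(3 * u)          # stand-in black-box model
    beta = quasi_regression(f)
    grid = np.linspace(0, 1, 5)
    print("f      :", np.round(f(grid), 3))
    print("approx :", np.round(reconstruct(beta, grid), 3))
```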
Enhanced Upper Confidence Bound Procedure for large-scale Ranking and Selection Song Huang, Guangxin Jiang, and Chenxi Li (Harbin Institute of Technology) and Ying Zhong (University of Electronic Science and Technology of China) Program Track: Modeling Methodology Program Tag: Ranking and Selection Abstract AbstractWith the rapid advancement of computing technology, there has been growing interest in effectively solving large-scale ranking and selection (R&S) problems. In this paper, we propose a new large-scale fixed-budget R&S procedure, namely the enhanced upper confidence bound (EUCB) procedure. The EUCB procedure incorporates variance information into the dynamic allocation of simulation budgets. It selects the alternative with the largest upper confidence bound. We prove that the EUCB procedure has sample optimality; that is, to achieve an asymptotically nonzero probability of correct selection (PCS), the total sample size required grows at the linear order with respect to the number of alternatives. We demonstrate the effectiveness of the EUCB procedure in numerical examples. In addition to achieving sample optimality under the PCS criterion, our numerical experiments also show that the EUCB procedure maintains sample optimality under the expected opportunity cost (EOC) criterion. pdfGeneral-Purpose Ranking and Selection for Stochastic Simulation with Streaming Input Data Jaime Gonzalez-Hodar and Eunhye Song (Georgia Institute of Technology) Program Track: Simulation Optimization Program Tags: Data Driven, Metamodeling, Ranking and Selection Abstract AbstractWe study ranking and selection (R&S) where the simulator’s input models are increasingly more precisely estimated from the streaming data obtained from the system. The goal is to decide when to stop updating the model and return the estimated optimum with a probability of good selection (PGS) guarantee. We extend the general-purpose R&S procedure by Lee and Nelson by integrating a metamodel that represents the input uncertainty effect on the simulation output performance measure. The algorithm stops when the estimated PGS is no less than 1−α accounting for both prediction error in the metamodel and input uncertainty. We then propose an alternative procedure that terminates significantly earlier while still providing the same (approximate) PGS guarantee by allowing the performance measures of inferior solutions to be estimated with lower precision than those of good solutions. Both algorithms can accommodate nonparametric input models and/or performance measures other than the means (e.g., quantiles). pdfRevisiting an Open Question in Ranking and Selection Under Unknown Variances Best Contributed Theoretical Paper - Finalist Jianzhong Du (University of Science and Technology of China), Siyang Gao (City University of Hong Kong), and Ilya O. Ryzhov (University of Maryland) Program Track: Analysis Methodology Program Tag: Ranking and Selection Abstract AbstractExpected improvement (EI) is a common ranking and selection (R&S) method for selecting the optimal system design from a finite set of alternatives. Ryzhov (2016) observed that, under normal sampling distributions with known variances, the limiting budget allocation achieved by EI was closely related to the theoretical optimum. However, when the variances are unknown, the behavior of EI was quite different, giving rise to the question of whether the optimal allocation in this setting was totally distinct from the known-variance case. 
This research solves that problem with a new analysis that can distinguish between known and unknown variance, unlike previously existing theoretical frameworks. We derive a new optimal budget allocation for this setting, and confirm that the limiting behavior of EI has a similar relationship to this allocation as in the known-variance case. pdfStopping Rules for Sampling in Precision Medicine Mingrui Ding (City University of Hong Kong, Beihang University); Siyang Gao (City University of Hong Kong); and Qiuhong Zhao (Beihang University) Program Track: Healthcare and Life Sciences Program Tag: Ranking and Selection Abstract AbstractPrecision medicine (PM) is an approach that aims to tailor treatments based on patient profiles (patients' biometric characteristics). In PM practice, treatment performance is typically evaluated through simulation models or clinical trials. Although these two methods have differences in their sampling subjects and requirements, both are based on a sequential sampling process and require determining a stopping time for sampling to ensure that, with a prespecified confidence level, the best treatment is correctly identified for each patient profile. In this research, we propose unified stopping rules applicable to both simulation and clinical trial-based PM sampling processes. Specifically, we adapt the generalized likelihood ratio (GLR) test to determine when samples collected are sufficient and calibrate it using mixture martingales with a peeling method. Our stopping rules are theoretically grounded and can be integrated with different types of sampling strategies. Numerical experiments on synthetic problems and a case study demonstrate their effectiveness. pdf
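A bare-bones version of the variance-aware upper-confidence-bound allocation described above for fixed-budget ranking and selection looks as follows: each round simulates the alternative whose sample mean plus a variance-scaled exploration bonus is largest, and the final selection is the best sample mean. The bonus form, constants, and test problem are generic illustrations rather than the EUCB procedure's exact specification.

```python
import numpy as np

rng = np.random.default_rng(2)

def ucb_rns(simulate, n_alt, budget, n0=10, c=2.0):
    """Fixed-budget R&S: warm-up with n0 replications per alternative, then spend the
    remaining budget on the alternative with the largest variance-aware UCB."""
    samples = [[simulate(i) for _ in range(n0)] for i in range(n_alt)]
    for t in range(n_alt * n0, budget):
        means = np.array([np.mean(s) for s in samples])
        stds = np.array([np.std(s, ddof=1) for s in samples])
        counts = np.array([len(s) for s in samples])
        ucb = means + c * stds * np.sqrt(np.log(t + 1) / counts)
        i = int(np.argmax(ucb))
        samples[i].append(simulate(i))
    return int(np.argmax([np.mean(s) for s in samples]))   # select the best sample mean

if __name__ == "__main__":
    true_means = np.linspace(0.0, 1.0, 20)                  # alternative 19 is best
    noise = np.linspace(0.5, 2.0, 20)                       # heterogeneous, unknown variances
    sim = lambda i: rng.normal(true_means[i], noise[i])
    print("selected alternative:", ucb_rns(sim, n_alt=20, budget=4000))
```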
Computing Estimators of a Quantile and Conditional Value-at-Risk Sha Cao, Truong Dang, James M. Calvin, and Marvin K. Nakayama (New Jersey Institute of Technology) Program Track: Analysis Methodology Program Tags: Monte Carlo, Rare Events, Sampling Abstract AbstractWe examine various sorting and selection methods for computing quantile and the conditional value-at-risk, two of the most commonly used risk measures in risk management scenarios. We study the situation where simulation data is already pre-generated, and perform timing experiments on calculating risk measures on the existing datasets. Through numerical experiments, approximate analyses, and existing theoretical results, we find that selection generally outperforms sorting, but which selection strategy runs fastest depends on several factors. pdfDEVS Models for Arctic Major Maritime Disasters Hazel Tura Griffith and Gabriel A. Wainer (Carleton University) Program Track: Modeling Methodology Program Tags: C++, DEVS, Rare Events Abstract AbstractModern modelling and simulation techniques allow us to safely test the policies used to mitigate disasters. We show how the DEVS formalism can be used to ease the modelling process by exploiting its modularity. We show how a policymaker’s existing models of any type can be recreated with DEVS so they may be reused in any new models, decreasing the number of new models that need to be made. We recreate a sequential decision model of an arctic major maritime disaster developed by the Canadian government as a DEVS model to demonstrate the method. The case study shows how DEVS allows policymakers to create models for studying emergency policies with greater ease. This work shows a method that can be used by policymakers, including models of emergency scenarios, and how they can benefit from creating equivalent DEVS models, as well as exploiting the beneficial properties of the DEVS formalism. pdfExtending Social Force Model for the Design and Development of Crowd Control and Evacuation Strategies using Hybrid Simulation Best Contributed Applied Paper - Finalist Aaron LeGrand and Seunghan Lee (Mississippi State University) Program Track: Military and National Security Applications Program Tags: Complex Systems, Emergent Behavior, Rare Events Abstract AbstractEfficient crowd control in public spaces is critical for mitigating threats and ensuring public safety, especially in scenarios where live testing environments are limited. It is important to study crowd behavior following disruptions and strategically allocate law enforcement resources to minimize the impact on civilian populations to improve security systems and public safety. This paper proposes an extended social force model to simulate crowd evacuation behaviors in response to security threats, incorporating the influence and coordination of law enforcement personnel. This research examines evacuation strategies that balance public safety and operational efficiency by extending social force models to account for dynamic law enforcement interventions. The proposed model is validated through physics-based simulations, offering insights into effective and scalable solutions for crowd control at public events. The proposed hybrid simulation model explores the utility of integrating agent-based and physics-based approaches to enhance community resilience through improved planning and resource allocation. 
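The social force family of models that the evacuation study extends reduces to summing a drive toward each agent's desired velocity with exponential repulsion from nearby pedestrians. The sketch below implements that classic Helbing-style update for point agents; the parameter values and the omission of walls, obstacles, and law-enforcement agents are simplifying assumptions, not the authors' extended model.

```python
import numpy as np

def social_force_step(pos, vel, goal, dt=0.05, v_des=1.4, tau=0.5, A=2.0, B=0.3):
    """One explicit-Euler step of a basic social force model.
    pos, vel: (n, 2) arrays of positions and velocities; goal: (2,) exit location."""
    to_goal = goal - pos
    e = to_goal / np.linalg.norm(to_goal, axis=1, keepdims=True)
    drive = (v_des * e - vel) / tau                      # relax toward desired velocity
    diff = pos[:, None, :] - pos[None, :, :]             # pairwise offsets
    dist = np.linalg.norm(diff, axis=2)
    np.fill_diagonal(dist, np.inf)                       # no self-repulsion
    repulse = (A * np.exp(-dist / B))[:, :, None] * diff / dist[:, :, None]
    force = drive + repulse.sum(axis=1)
    vel = vel + dt * force
    return pos + dt * vel, vel

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    pos = rng.uniform(0, 5, size=(30, 2))
    vel = np.zeros((30, 2))
    exit_door = np.array([10.0, 2.5])
    for _ in range(400):                                 # 20 simulated seconds
        pos, vel = social_force_step(pos, vel, exit_door)
    print("mean distance to exit:", round(float(np.linalg.norm(pos - exit_door, axis=1).mean()), 2))
```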
pdfWhen Machine Learning Meets Importance Sampling: A More Efficient Rare Event Estimation Approach Ruoning Zhao and Xinyun Chen (The Chinese University of Hong Kong, Shenzhen) Program Track: Data Science and Simulation Program Tags: Rare Events, Sampling, Variance Reduction Abstract AbstractDriven by applications in telecommunication networks, we explore the simulation task of estimating rare event probabilities for tandem queues in their steady state. Existing literature has recognized that importance sampling methods can be inefficient, due to the exploding variance of the path-dependent likelihood functions. To mitigate this, we introduce a new importance sampling approach that utilizes a marginal likelihood ratio on the stationary distribution, effectively avoiding the issue of excessive variance. In addition, we design a machine learning algorithm to estimate this marginal likelihood ratio using importance sampling data. Numerical experiments indicate that our algorithm outperforms the classic importance sampling methods. pdf
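The likelihood-ratio weighting at the core of any importance sampling estimator, including the marginal-ratio variant proposed above, is compactly illustrated on a toy problem: estimating P(X > c) for standard normal X by sampling from a mean-shifted proposal and reweighting. The tandem-queue setting and the learned marginal ratio from the paper are not reproduced; this shows only the generic weighting mechanism.

```python
import math
import numpy as np

rng = np.random.default_rng(4)

def crude_mc(c, n):
    """Plain Monte Carlo estimate of P(X > c); very noisy for rare events."""
    return np.mean(rng.standard_normal(n) > c)

def importance_sampling(c, n):
    """Sample from the shifted proposal N(c, 1) and reweight each draw by the
    likelihood ratio p(x)/q(x) = exp(c**2 / 2 - c * x)."""
    x = rng.normal(loc=c, size=n)
    lr = np.exp(0.5 * c * c - c * x)
    return np.mean((x > c) * lr)

if __name__ == "__main__":
    c, n = 4.0, 100_000
    print("exact      :", 0.5 * math.erfc(c / math.sqrt(2)))   # about 3.17e-05
    print("crude MC   :", crude_mc(c, n))                      # only a handful of hits expected
    print("IS estimate:", importance_sampling(c, n))           # low-variance estimate
```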
Discrete Event Simulation for Assessing the Impact of Bus Fleet Electrification on Service Reliability Best Contributed Applied Paper - Finalist Minjie Xia, Wenying Ji, and Jie Xu (George Mason University) Program Track: Project Management and Construction Program Tags: Complex Systems, Data Driven, Python, Resiliency Abstract AbstractThis paper aims to derive a simulation model to evaluate the impact of bus fleet electrification on service reliability. At its core, the model features a micro discrete event simulation (DES) of an urban bus network, integrating a route-level bus operation module and a stop-level passenger travel behavior module. Key reliability indicators—bus headway deviation ratio, excess passenger waiting time, and abandonment rate—are computed to assess how varying levels of electrification influence service reliability. A case study of route 35 operated by DASH in Alexandria, VA, USA is conducted to demonstrate the applicability and interpretability of the developed DES model. The results reveal trade-offs between bus fleet electrification and service reliability, highlighting the role of operational constraints and characteristics of electric buses (EBs). This research provides transit agencies with a data-driven tool for evaluating electrification strategies while maintaining reliable and passenger-centered service. pdfInfluence of Norms in Alliance Characteristics of Humanitarian Food Agencies: Capability, Compatibility and Satisfaction Naimur Rahman Chowdhury and Rashik Intisar Siddiquee (North Carolina State University) and Julie Simmons Ivy (University of Michigan) Program Track: Environment, Sustainability, and Resilience Program Tags: Conceptual Modeling, Python, Resiliency Abstract AbstractHunger relief networks consist of agencies that work as independent partners within a food bank network. For these networks to effectively and efficiently reduce food insecurity, strategic alliances between agencies are crucial. Agency preference for forming alliances with other agencies can impact network structure and network satisfaction. In this paper, we explore the compatibility and satisfaction achieved by alliances between different agencies. We introduce two agency norms: conservative and diversifying. We develop an agent-based simulation model to investigate alliance formation in a network. We evaluate network satisfaction, satisfaction among different types of agencies, and alliance heterogeneity. We test the statistical significance of satisfaction within a norm and between norms for different agencies. Findings reveal that the ‘diversifying’ norm in the network reduces gaps in satisfaction between strong and weak agencies, ensuring fairness for weaker agencies in the network, whereas the ‘conservative’ norm favors moderate agencies in the network. pdfOptimizing Production Planning and Control: Reward Function Design in Reinforcement Learning Marc Wegmann, Benedikt Gruenhag, Michael Zaeh, and Christina Reuter (Technical University of Munich) Program Track: Simulation and Artificial Intelligence Program Tag: Resiliency Abstract AbstractProduction planning and control (PPC) is challenged by the complex and volatile environment manufacturers face. One promising approach in PPC is the application of Reinforcement Learning (RL). In RL, an intelligent agent is trained in a simulation environment based on its experiences. 
The behavior of the agent is shaped by defining a reward function that provides positive feedback if the agent performs well and negative feedback if it does not. Accordingly, the design of the reward function determines the impact RL can have. This article addresses the challenge of designing a suitable reward function. To do so, 8 design principles and 21 design parameters were identified based on a structured literature review. The principles and parameters were utilized to systematically derive reward function alternatives for a given PPC task. These alternatives were applied to a use case in rough production scheduling, a subtask of PPC. pdf
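As a concrete illustration of the design space discussed above, the following hypothetical sketch shows one way a parameterized PPC reward function might be written; the terms and weights are illustrative assumptions, not the 8 principles and 21 parameters identified in the paper.

```python
# Hypothetical parameterized reward function for a PPC agent (illustrative only;
# the terms and default weights are assumptions, not the paper's design parameters).

def ppc_reward(tardiness_h: float, wip_level: float, utilization: float,
               w_tardiness: float = 1.0, w_wip: float = 0.1, w_util: float = 0.5) -> float:
    """Dense reward: penalize tardiness and WIP, reward machine utilization."""
    return -w_tardiness * tardiness_h - w_wip * wip_level + w_util * utilization

# Example: one simulation step with 3 h total tardiness, 12 jobs in WIP, 80% utilization.
print(ppc_reward(tardiness_h=3.0, wip_level=12.0, utilization=0.8))
```

Varying the weights or dropping terms is exactly the kind of alternative-generation step that turns such design parameters into candidate reward functions for a given PPC task.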
A New Stochastic Approximation Method for Gradient-based Simulated Parameter Estimation Zehao Li and Yijie Peng (Peking University) Program Track: Simulation Optimization Program Tags: Monte Carlo, Sampling Abstract AbstractThis paper tackles the challenge of parameter calibration in stochastic models, particularly in scenarios where the likelihood function is unavailable in an analytical form. We introduce a gradient-based simulated parameter estimation (GSPE) framework, which employs a multi-time scale stochastic approximation algorithm. This approach effectively addresses the ratio bias that arises in both maximum likelihood estimation and posterior density estimation problems. The proposed algorithm enhances estimation accuracy and significantly reduces computational costs, as demonstrated through extensive numerical experiments. Our work extends the GSPE framework to handle complex models such as hidden Markov models and variational inference-based problems, offering a robust solution for parameter estimation in challenging stochastic environments. pdfComputing Estimators of a Quantile and Conditional Value-at-Risk Sha Cao, Truong Dang, James M. Calvin, and Marvin K. Nakayama (New Jersey Institute of Technology) Program Track: Analysis Methodology Program Tags: Monte Carlo, Rare Events, Sampling Abstract AbstractWe examine various sorting and selection methods for computing a quantile and the conditional value-at-risk, two of the most commonly used risk measures in risk management scenarios. We study the situation where simulation data is already pre-generated, and perform timing experiments for calculating risk measures on the existing datasets. Through numerical experiments, approximate analyses, and existing theoretical results, we find that selection generally outperforms sorting, but which selection strategy runs fastest depends on several factors. pdfNested Denoising Diffusion Sampling for Global Optimization Yuhao Wang (Georgia Institute of Technology), Haowei Wang (National University of Singapore), Enlu Zhou (Georgia Institute of Technology), and Szu Hui Ng (National University of Singapore) Program Track: Simulation Optimization Program Tags: Data Driven, Sampling Abstract AbstractWe propose a novel algorithm, Nested Denoising Diffusion Sampling (NDDS), for solving deterministic global optimization problems where the objective function is a black box—unknown, possibly non-differentiable, and expensive to evaluate. NDDS addresses this challenge by leveraging conditional diffusion models to efficiently approximate the evolving solution distribution without incurring the cost of extensive function evaluations. Unlike existing diffusion-based optimization methods that operate in offline settings and rely on manually specified conditioning variables, NDDS systematically generates these conditioning variables through a statistically principled mechanism. In addition, we introduce a data reweighting strategy to address the distribution mismatch between the training data and the target sampling distribution. Numerical experiments demonstrate that NDDS consistently outperforms the Extended Cross-Entropy (CE) method under the same function evaluation budget, particularly in high-dimensional settings.
pdfSample Efficient Exploration Policy for Asynchronous Q-Learning Xinbo Shi (Peking University), Jing Dong (Columbia University), and Yijie Peng (Peking University) Program Track: Simulation and Artificial Intelligence Program Tag: Sampling Abstract AbstractThis paper investigates the sample efficient exploration policy for asynchronous Q-learning from the perspective of uncertainty quantification. Although algorithms like $\epsilon$-greedy can balance exploration and exploitation, their performances heavily depend on hyperparameter selection, and a systematic approach to designing exploration policies remains an open question. Inspired by contextual Ranking and Selection problems, we focus on optimizing the probability of correctly selecting optimal actions (PCS) rather than merely estimating Q-values accurately. We establish a novel central limit theorem for asynchronous Q-iterations, enabling the development of two strategies: (1) an optimization-based policy that seeks an optimal computing budget allocation and (2) a parameter-based policy that selects from a parametrized family of policies. Specifically, we propose minimizing an asymptotic proxy of Q-value uncertainty with regularization. Experimental results on benchmark problems, including River Swim and Machine Replacement, demonstrate that the proposed policies can effectively identify sample-efficient exploration strategies. pdfWhen Machine Learning Meets Importance Sampling: A More Efficient Rare Event Estimation Approach Ruoning Zhao and Xinyun Chen (The Chinese University of Hong Kong, Shenzhen) Program Track: Data Science and Simulation Program Tags: Rare Events, Sampling, Variance Reduction Abstract AbstractDriven by applications in telecommunication networks, we explore the simulation task of estimating rare event probabilities for tandem queues in their steady state. Existing literature has recognized that importance sampling methods can be inefficient, due to the exploding variance of the path-dependent likelihood functions. To mitigate this, we introduce a new importance sampling approach that utilizes a marginal likelihood ratio on the stationary distribution, effectively avoiding the issue of excessive variance. In addition, we design a machine learning algorithm to estimate this marginal likelihood ratio using importance sampling data. Numerical experiments indicate that our algorithm outperforms the classic importance sampling methods. pdf
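For readers unfamiliar with the epsilon-greedy baseline that the asynchronous Q-learning paper above contrasts with its PCS-based exploration policies, a minimal action-selection sketch follows; it is a generic illustration, not the paper's optimization-based or parameter-based strategies.

```python
import numpy as np

# Generic epsilon-greedy action selection, the hyperparameter-sensitive baseline
# discussed in the asynchronous Q-learning abstract above (illustrative only).

def epsilon_greedy(q_values: np.ndarray, epsilon: float, rng: np.random.Generator) -> int:
    """With probability epsilon pick a random action, otherwise the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

rng = np.random.default_rng(1)
q = np.array([0.2, 0.7, 0.1])
actions = [epsilon_greedy(q, epsilon=0.1, rng=rng) for _ in range(10)]
print(actions)  # mostly action 1, with occasional random exploration
```

The abstract's point is that performance hinges on how epsilon is chosen, which motivates allocating exploration effort by the probability of correct selection instead.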
Siemens Tecnomatix Plant Simulation Digital Twins for Optimizing the Transition from Job-shop to Mass Production: Insights from Marine Pump Manufacturing in Scandinavia Sebastian Pihl (University of Exeter, FRAMO); Ahmad Attar and Martino Luis (University of Exeter); and Øystein Haugen (FRAMO) Program Track: Simulation as Digital Twin Program Tag: Siemens Tecnomatix Plant Simulation Abstract AbstractMarine pumping systems are among the most essential equipment for maritime operations. Typically, this type of equipment is manufactured to order in small quantities, thereby increasing the cost and time to market. The rising demand for this equipment has made the transition to mass production even more attractive for key players, which could give these companies a significant competitive edge. However, to maintain the competitive advantages of such a transition, it is crucial to make optimal decisions, taking into account all influential aspects. This study, assisted by experts from pioneering companies in this industry, proposes an integrated approach that applies redundancy analysis, inventory policy calibration, and GA-based optimization to address these challenges—all built upon a DES-based digital twin. Applying our framework to the studied case drastically reduced the cycle time from more
than a week to about one day, raising annual capacity above the projected demand. pdfHybrid Simulation-based Algorithm Tuning for Production Speed Management System as a Stand-alone Online Digital Twin Ahmad Attar, Martino Luis, and Tzu-Chun Lin (University of Exeter); Shuya Zhong (University of Bath); and Voicu Ion Sucala and Abdulaziz Alageel (University of Exeter) Program Track: Hybrid Modeling and Simulation Program Tags: DOE, Siemens Tecnomatix Plant Simulation Abstract AbstractOne of the primary in-built components of smart, continuous manufacturing lines is the production speed management system (PSMS). The decisions made in these systems tend to be overly cautious and centered on local adjustments to the manufacturing process, a major drawback that prevents them from acting as proper digital twins. This study hybridizes continuous and discrete-event simulation, DOE, and V-graph methods to redefine PSMS’s internal decision algorithms and procedures, giving it an aerial perspective of the line and turning it into a stand-alone online digital twin with decisions at a system level. The proposed approach is applied to a practical case from the food and beverage industry to validate its effectiveness. Numerical results demonstrated an intelligent, dynamic balancing of the production line, a substantial increase in productivity, and up to 37.7% better resiliency against new failure and repair patterns. pdfSimulation-based Production Planning For An Electronic Manufacturing Service Provider Using Collaborative Planning Gabriel Thurow, Benjamin Rolf, Tobias Reggelin, and Sebastian Lang (Otto-von-Guericke-University Magdeburg) Program Track: Logistics, Supply Chain Management, Transportation Program Tags: Siemens Tecnomatix Plant Simulation, Supply Chain Abstract AbstractThe growing trend of specialization is significantly increasing the importance of Electronic Manufacturing Services (EMS) providers. Typically, EMS companies operate within global supply networks characterized by high complexity and dynamic interactions between multiple stakeholders. As a consequence, EMS providers frequently experience volatile and opaque procurement and production planning processes. This paper investigates the potential of collaborative planning between EMS providers and their customers to address these challenges. Using discrete-event simulation, we compare traditional isolated planning approaches with collaborative planning strategies. Based on empirical data from an EMS company, our findings highlight the benefits of collaborative planning, particularly in improving inventory management and service levels for EMS providers. We conclude by presenting recommendations for practical implementation of collaborative planning in the EMS industry. pdf
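The marine-pump study above couples its DES-based digital twin with GA-based optimization. The toy sketch below shows the bare mechanics of such a genetic loop (selection plus mutation) on a made-up fitness function standing in for a simulated KPI; it is an illustrative assumption, not the study's actual optimizer or model.

```python
import numpy as np

# Toy genetic-algorithm loop: keep the fittest half, then apply Gaussian mutation.
# The quadratic fitness below is a stand-in for a simulated KPI such as cycle time.

rng = np.random.default_rng(0)

def fitness(reorder_point: float) -> float:
    """Lower is better; pretend 42 is the ideal (simulated) reorder point."""
    return (reorder_point - 42.0) ** 2

population = rng.uniform(0, 100, size=20)
for generation in range(50):
    scores = np.array([fitness(x) for x in population])
    parents = population[np.argsort(scores)[:10]]                  # select the best half
    children = parents + rng.normal(0.0, 2.0, size=parents.shape)  # mutate the parents
    population = np.concatenate([parents, children])

best = population[np.argmin([fitness(x) for x in population])]
print(round(float(best), 2))   # settles near 42
```

In a simulation-based setup, each fitness evaluation would instead be a digital-twin run, which is why a small, well-guided population matters.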
A Synergistic Approach to Workforce Optimization in Airport Screening using Machine Learning and Discrete-Event Simulation Lauren A. Cravy and Eduardo Perez (Texas State University) Program Track: Logistics, Supply Chain Management, Transportation Program Tag: Simio Abstract AbstractThis study explores the integration of machine learning (ML) clustering techniques into a simulation-optimization framework aimed at enhancing the efficiency of airport security checkpoints. Simulation-optimization is particularly suited for addressing problems characterized by evolving data uncertainties, necessitating critical system decisions before the complete data stream is observed. This scenario is prevalent in airport security, where passenger arrival times are unpredictable, and resource allocation must be planned in advance. Despite its suitability, simulation-optimization is computationally intensive, limiting its practicality for real-time decision-making. This research hypothesizes that incorporating ML clustering techniques into the simulation-optimization framework can significantly reduce computational time. A comprehensive computational study is conducted to evaluate the performance of various ML clustering techniques, identifying the OPTICS method as the best found approach. By incorporating ML clustering methods, specifically the OPTICS technique, the framework significantly reduces computational time while maintaining high-quality solutions for resource allocation. pdfOptimizing Emergency Department Throughput: A Discrete Event Simulation Study to Mitigate the Impact of Imminent Patient Volume Increases at Standalone Emergency Department Liam Coen (Northwell Health); Peter Woods (Northwell Greenwich Village Hospital, Northwell Health); Rachel Bruce (Northwell Greenwich Village Hospital); Gillian Glenn (Northwell Greenwich Village Hospital, Northwell Health); and Shaghayegh Norouzzadeh (Northwell Health) Program Track: Healthcare and Life Sciences Program Tag: Simio Abstract AbstractThis study optimized resource allocation in a standalone Emergency Department projected to experience a 10-30% patient volume increase. Combining data analysis, interviews, and process mapping, a Discrete Event Simulation model was created in Simio, replicating patient flow. The model revealed the ED could manage a 20% volume surge with minor staffing adjustments while maintaining current resources. At 20% increased volume, key metrics such as door-to-provider and treat-and-release times increased to 18 and 200 minutes, surpassing 2023 results by 38% and 12%, respectively. However, exceeding 20% led to an 87% utilization rate for nighttime nurses, creating a potential bottleneck. Minor staffing adjustments mitigated increased treat-and-release times under moderate volume surges, and the site used simulation optimization results to add an 8-hour shift of provider support in the Sunday nighttime hours. This framework offers valuable insights for other EDs anticipating similar challenges, enabling proactive resource management and process optimization. pdf
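The airport-screening study above identifies OPTICS as its best-performing clustering method. The sketch below shows, on synthetic arrival profiles (an assumption, not the study's data), how scikit-learn's OPTICS can group similar demand days so that only one representative per cluster needs a full simulation-optimization run.

```python
import numpy as np
from sklearn.cluster import OPTICS

# Cluster synthetic daily passenger-arrival profiles with OPTICS so that only one
# representative day per cluster needs a full checkpoint simulation run.
# The profile generator below is a made-up stand-in for real arrival data.

rng = np.random.default_rng(0)
hours = np.arange(24)
profiles = np.vstack([
    200 + 80 * np.exp(-0.5 * ((hours - peak) / 2.0) ** 2) + rng.normal(0, 10, 24)
    for peak in rng.choice([7, 12, 18], size=60)   # three typical peak patterns
])

clusterer = OPTICS(min_samples=5).fit(profiles)
labels = clusterer.labels_                          # -1 marks points labeled as noise
for label in sorted(set(labels) - {-1}):
    members = np.where(labels == label)[0]
    print(f"cluster {label}: {len(members)} days, representative day index {members[0]}")
```

Reducing dozens of candidate days to a handful of representatives is what cuts the computational cost of the simulation-optimization loop.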
Discrete Event Simulation for Sustainable Hospital Pharmacy: The Case of Aseptic Service Unit Fatemeh Alidoost, Navonil Mustafee, Thomas Monks, and Alison Harper (University of Exeter) Program Track: Healthcare and Life Sciences Program Tag: SIMUL8 Abstract AbstractWithin hospital pharmacies, aseptic units preparing high-risk injectable medicines face environmental and economic challenges due to resource-intensive processes and emissions. Variability in patient dosage requirements leads to inefficient drug vial usage, resulting in waste generation, carbon emissions from that waste, and increased costs. Batching could reduce the resource consumption and waste associated with single-dose preparation. This study develops a discrete event simulation, as a tool for strategy evaluation and experimentation, to assess the impact of batching on productivity and sustainability. The model captures key process dynamics, including prescription arrivals, production processes, and resources consumed. By experimenting with time-sensitive and size-based batching, the study evaluates their effects on reducing medical and nonmedical waste, thereby contributing to cost savings, lower carbon emissions, and improved productivity through enhanced workflow efficiency. This study offers insights for hospital pharmacies to evaluate the effectiveness of batching strategies for reducing waste and promoting sustainability. pdfSimulating the dynamic interaction between fleet performance and maintenance processes based on Remaining Useful Life Christoph Werner (Minitab) Program Track: Reliability Modeling and Simulation Program Tags: Python, SIMUL8 Abstract AbstractFleet planning is often challenging; in particular, the dynamic interaction with maintenance entails various uncertainties. Predicting the arrivals into maintenance processes requires an understanding of fleet performance over time while, in turn, delays of repairs severely impact fleet performance and deterioration.
This feedback loop has been neglected so far, which is why we present a novel framework using a ‘rolling window’ machine learning model to predict the inputs into a discrete event simulation (DES) of repair activities based on remaining useful life (RUL). Our ‘fleet tracker’ then uses the DES outputs to simulate fleet performance together with environmental and mission-based factors which form the inputs for predicting RUL. Finally, explainable ML helps decision-makers construct relevant ‘what if’ scenarios. As a motivating example, we consider helicopters in search and rescue missions and their maintenance. As a key result, we compare two scenarios of repair turnaround times and their impact on RUL decline. pdf
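The 'rolling window' idea in the fleet-maintenance abstract above can be illustrated with a small supervised-learning sketch: slice a degradation signal into fixed-length windows and learn to map each window to the remaining useful life. The synthetic signal, window size, and model choice are assumptions for illustration, not the authors' fleet tracker.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Rolling-window RUL sketch on a synthetic wear signal (illustrative assumptions only).

rng = np.random.default_rng(0)
life = 300                                              # operating hours until failure
signal = np.cumsum(rng.normal(0.5, 1.0, life))          # synthetic degradation signal

window = 20
X = np.array([signal[t - window:t] for t in range(window, life)])   # one row per window
y = np.array([life - t for t in range(window, life)])               # RUL at window end

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:-50], y[:-50])
print(model.predict(X[-5:]))                            # predicted RUL near end of life
print(y[-5:])                                           # true RUL for comparison
```

In the framework described above, such RUL predictions would feed the arrival stream of the maintenance DES, closing the loop between fleet performance and repair capacity.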
Advanced Dynamic Spare Parts Inventory Management Utilizing Machine Health Data Best Contributed Applied Paper - Finalist Jennifer Kruman, Avital Kaufman, and Yale Herer (Technion) Program Track: Data Science and Simulation Program Tags: Python, Supply Chain Abstract AbstractThis research presents a novel approach to spare parts inventory management by integrating real-time machine health data with dynamic, state-dependent inventory policies. Traditional static models overlook the evolving conditions of industrial machinery. Leveraging advanced digital technologies, such as those pioneered by Augury, our framework dynamically adjusts inventory levels, reducing costs and improving service. Using Markov chain modeling, simulation, and industry collaboration, we demonstrate up to 29% cost savings with state-dependent policies over static base-stock models. Sensitivity analysis confirms the robustness of these strategies. pdfAn Agent-Based Framework for Sustainable Perishable Food Supply Chains Maram Shqair (Auburn University); Karam Sweis, Haya Dawkassab, and Safwan Altarazi (German Jordanian University); and Konstantinos Mykoniatis (Auburn University) Program Track: Logistics, Supply Chain Management, Transportation Program Tags: AnyLogic, Input Modeling, Supply Chain Abstract AbstractThis study presents an agent-based modeling framework for enhancing the efficiency and sustainability of perishable food supply chains. The framework integrates forward logistics redesign, reverse logistics, and waste valorization into a spatially explicit simulation environment. It is applied to the tomato supply chain in Jordan, restructuring the centralized market configuration into a decentralized closed loop system with collection points, regional hubs, and biogas units. The model simulates transportation flows, agent interactions, and waste return through retailer backhauls. Simulation results show a 31.1 percent reduction in annual transportation distance and cost, and a 35.9 percent decrease in transportation cost per ton. The proposed approach supports cost-effective logistics and a more equitable distribution of transport burden, particularly by shifting a greater share to retailers. Its modular structure, combined with reliance on synthetic data and scenario flexibility, makes it suitable for evaluating strategies in fragmented, resource-constrained supply chains. pdfAn Empirical Study on the Assessment of Demand Forecasting Reliability for Fabless Semiconductor Companies In-Guk Choi and Seon-Young Hwang (Korea Advanced Institute of Science and Technology); Jeongsun Ahn, Jehun Lee, and Sanghyun Joo (Korea Advanced Institute of Science and Technology); Kiung Kim, Haechan Lee, and Yoong Song (Samsung Electronics); and Hyung-Jung Kim (Korea Advanced Institute of Science and Technology) Program Track: MASM: Semiconductor Manufacturing Program Tags: Neural Networks, Supply Chain Abstract AbstractFabless semiconductor companies—semiconductor design experts without their own factories—serve as the essential bridge between sophisticated customer needs and technological innovations, playing a pivotal role in the semiconductor supply chain. At these companies, planning teams receive demand forecasts from the sales team and develop production plans that consider inventory, capacity, and lead time. However, due to the inherent characteristics of the semiconductor industry—high demand volatility, short product cycles, and extended lead times—a substantial gap often exists between sales forecasts and actual demand.
Consequently, evaluating forecast reliability is critical for planning teams that rely solely on sales forecasts for production planning. In this paper, we propose a novel machine learning framework that assesses forecast reliability by classifying demand forecasts as either overestimates or underestimates rather than using regression methods. Experimental results confirm its effectiveness in assessing forecast reliability. pdfAnalyzing Implementation for Digital Twins: Implications for a Process Model Annika Hesse (TU Dortmund University) and Hendrik van der Valk (TU Dortmund University, Fraunhofer Institute for Software and Systems Engineering ISST) Program Track: Simulation as Digital Twin Program Tags: Conceptual Modeling, Data Driven, Supply Chain Abstract AbstractFor many companies, digital transformation is an important lever for adapting their work and business processes to constant change, keeping them up-to-date and reactive to changes in the global market. Digital twins are seen as a promising means of holistically transforming production systems and value chains, but despite their potential, there has been a lack of standardized implementation processes, often resulting in efficiency losses. Therefore, this paper aims to empirically identify process models for implementing digital twins through a structured literature review and derive implications for a standardized, widely applicable process model. The literature review is based on vom Brocke’s methodology and focuses on scientific articles from the last years. Based on 211 identified publications, relevant papers were analyzed after applying defined exclusion criteria. The results provide fundamental insights into currently used process models and open perspectives for developing a standardized implementation framework for digital twins. pdfApplying Operating Curve Principles to Non-Manufacturing Processes to gain Efficiency and Effectiveness in Global Semiconductor Supply Chains Niklas Lauter and Hans Ehm (Infineon Technologies AG) and Gerald Reiner (WU Wien) Program Track: MASM: Semiconductor Manufacturing Program Tag: Supply Chain Abstract AbstractIn today's volatile and complex world, non-manufacturing processes are crucial for global agile and, thus, resilient semiconductor supply chains. Examples of non-manufacturing processes include new product samples, fab overarching process flows, and enabling processes. This conceptual paper begins with the hypothesis that operating curve principles from manufacturing can also be applied to non-manufacturing processes. To achieve this, we needed to understand why the principles of operating curves and flow factors lead to efficiency and effectiveness in semiconductor manufacturing. Our goal was to determine how these principles can be applied to broader contexts and specific use cases. The initial experimental results are promising. Furthermore, it is essential to assess how non-manufacturing processes are structured. The paper concludes with examples demonstrating the potential to bridge the efficiency gap between manufacturing and non-manufacturing processes. The ultimate goal is to match both efficiency levels, thus opening new opportunities within global semiconductor supply chains. pdfBridging the Gap: A Practical Guide to Implementing Deep Reinforcement Learning Simulation in Operations Research with Gymnasium Konstantinos Ziliaskopoulos, Alexander Vinel, and Alice E. 
Smith (Auburn University) Program Track: Introductory Tutorials Program Tags: Neural Networks, Python, Supply Chain Abstract AbstractDeep Reinforcement Learning (DRL) has shown considerable promise in addressing complex sequential decision-making tasks across various fields, yet its integration within Operations Research (OR) remains limited despite clear methodological compatibility. This paper serves as a practical tutorial aimed at bridging this gap, specifically guiding simulation practitioners and researchers through the process of developing DRL environments using Python and the Gymnasium library. We outline the alignment between traditional simulation model components, such as state and action spaces, objective functions, and constraints, and their DRL counterparts. Using an inventory control scenario as an illustrative example, which is also available online through our GitHub repository, we detail the steps involved in designing, implementing, and integrating custom DRL environments with contemporary DRL algorithms. pdfClassical and AI-based Explainability of Ontologies on the Example of the Digital Reference – the Semantic Web for Semiconductor and Supply Chains Containing Semiconductors Marta Bonik (Infineon Technologies AG), Eleni Tsaousi (Harokopio University of Athens), Hans Ehm (Infineon Technologies AG), and George Dimitrakopoulos (Harokopio University of Athens) Program Track: MASM: Semiconductor Manufacturing Program Tags: Conceptual Modeling, Python, Supply Chain Abstract AbstractOntologies are essential for structuring knowledge in complex domains like semiconductor supply chains but often remain inaccessible to non-technical users. This paper introduces a combined classical and AI-based approach to improve ontology explainability, using Digital Reference (DR) as a case study. The first approach leverages classical ontology visualization tools, enabling interactive access and feedback for user engagement. The second integrates Neo4j graph databases and Python with a large language model (LLM)-based architecture, facilitating natural language querying of ontologies. A post-processing layer ensures reliable and accurate responses through query syntax validation, ontology schema verification, fallback templates, and entity filtering. The approach is evaluated with natural language queries, demonstrating enhanced usability, robustness, and adaptability. By bridging the gap between traditional query methods and AI-driven interfaces, this work promotes the broader adoption of ontology-driven systems in the Semantic Web and industrial applications, including semiconductor supply chains. pdfImproving Plan Stability in Semiconductor Manufacturing through Stochastic Optimization: a Case Study Eric Thijs Weijers and Nino Sluijter (NXP Semiconductors N.V., Eindhoven University of Technology); Gijs Hogers (NXP Semiconductors N.V., Tilburg University); Kai Schelthoff (NXP Semiconductors N.V.); and Ivo Adan and Willem van Jaarsveld (Eindhoven University of Technology) Program Track: MASM: Semiconductor Manufacturing Program Tag: Supply Chain Abstract AbstractIn this study, we propose a two-stage stochastic programming method to improve plan stability in semiconductor supply chain master planning in a rolling horizon setting. The two-stage programming model is applied to real-world data from NXP Semiconductors N.V. to assess the quality of generated plans based on the KPIs plan stability, on-time delivery, and inventory position. 
We also compare the performance of two-stage stochastic programming to linear programming. To model demand uncertainty, we propose to fit distributions to historical demand data from which stochastic demand can be sampled. For modeling supply, we propose an aggregated rolling horizon simulation model of the front-end supply chain. Based on the performed experiments, we conclude that two-stage programming outperforms LP in terms of plan stability, while performing comparably in terms of inventory position and on-time delivery. pdfLLM Assisted Value Stream Mapping Micha Jan Aron Selak, Dirk Krechel, and Adrian Ulges (RheinMain University of Applied Sciences) and Sven Spieckermann, Niklas Stoehr, and Andreas Loehr (SimPlan AG) Program Track: Modeling Methodology Program Tags: Neural Networks, Supply Chain Abstract AbstractThe correct design of digital value stream models is an intricate task, which can be challenging especially for untrained or inexperienced users. We address the question of whether large language models can be adapted to "understand" a value stream’s structure and act as modeling assistants, which could support users in repairing errors and in adding or configuring process steps in order to create valid value stream maps that can be simulated. Specifically, we propose a domain-specific multi-task training process, in which an instruction-tuned large language model is fine-tuned to yield specific information on its input value stream or to fix scripted modeling errors. The resulting model – which we coin Llama-VaStNet – can manipulate value stream structures given user requests in natural language. We demonstrate experimentally that Llama-VaStNet outperforms its domain-agnostic vanilla counterpart, i.e., it is 19% more likely to produce correct individual manipulations. pdfModular Digital Twins: The Foundation for the Factory of the Future Hendrik van der Valk (TU Dortmund University, Fraunhofer Institute for Software and Systems Engineering ISST); Lasse Jurgeleit (TU Dortmund University); and Joachim Hunker (Fraunhofer Institute for Software and Systems Engineering ISST) Program Track: Simulation as Digital Twin Program Tags: AnyLogic, Supply Chain Abstract AbstractCompanies face stiff challenges regarding their value chains' circular and digital transformation. Digital Twins are a valuable and powerful tool to ease such transformation. Yet, Digital Twins are not just one virtualized model but several parts with different functions. This paper analyzes Digital Twins’ frameworks and reference models on an architectural level. We derive a modular framework displaying best practices based on empirical data from particular use cases. In doing so, we concentrate on discrete manufacturing processes to leverage benefits for the factory of the future. According to a design science cycle, we also demonstrate and evaluate the modular framework in a real-world application in an assembly line. The study provides an overview of the state-of-the-art for Digital Twin frameworks and shows ways for easy implementation and avenues for further development. As a synthesis of particular architectures, the modular approach offers a novel and thoroughly generalizable blueprint for Digital Twins.
pdfSimulation-based Analysis of a Hydrogen Infrastructure to Supply a Regional Hydrogen Hub Michael Teucke, Abderrahim Ait Alla, Lennart Steinbacher, Eike Broda, and Michael Freitag (BIBA - Bremer Institut für Produktion und Logistik GmbH at the University of Bremen) Program Track: Agent-based Simulation Program Tags: AnyLogic, Supply Chain Abstract AbstractMany countries plan to adopt hydrogen as a major energy carrier, which requires a robust infrastructure to meet rising demand. This paper presents a simulation model quantitatively analyzing the capacity of a potential hydrogen infrastructure in a coastal region of Northern Germany to supply a hydrogen hub in Bremen. The model covers ship-based imports of hydrogen, either as liquid hydrogen or ammonia, unloading at port terminals, conversion to gaseous hydrogen, pipeline transport to the hub, and end-use consumption. Various scenarios are simulated to quantitatively assess infrastructure needs under projected demand. Results show that ammonia-based imports offer greater supply reliability under low and medium demand, while liquid hydrogen performs better under high demand due to faster unloading times. Demand-driven supply policies generally outperform fixed-interval approaches by maintaining higher storage levels and aligning supply more closely with consumption patterns. pdfSimulation-based Production Planning For An Electronic Manufacturing Service Provider Using Collaborative Planning Gabriel Thurow, Benjamin Rolf, Tobias Reggelin, and Sebastian Lang (Otto-von-Guericke-University Magdeburg) Program Track: Logistics, Supply Chain Management, Transportation Program Tags: Siemens Tecnomatix Plant Simulation, Supply Chain Abstract AbstractThe growing trend of specialization is significantly increasing the importance of Electronic Manufacturing Services (EMS) providers. Typically, EMS companies operate within global supply networks characterized by high complexity and dynamic interactions between multiple stakeholders. As a consequence, EMS providers frequently experience volatile and opaque procurement and production planning processes. This paper investigates the potential of collaborative planning between EMS providers and their customers to address these challenges. Using discrete-event simulation, we compare traditional isolated planning approaches with collaborative planning strategies. Based on empirical data from an EMS company, our findings highlight the benefits of collaborative planning, particularly in improving inventory management and service levels for EMS providers. We conclude by presenting recommendations for practical implementation of collaborative planning in the EMS industry. pdfSupply Chain Optimization via Generative Simulation and Iterative Decision Policies Haoyue Bai (Arizona State University); Haoyu Wang (NEC Labs America.); Nanxu Gong, Xinyuan Wang, and Wangyang Ying (Arizona State University); Haifeng Chen (NEC Labs America.); and Yanjie Fu (Arizona State University) Program Track: Data Science and Simulation Program Tags: Data Driven, Python, Supply Chain Abstract AbstractHigh responsiveness and economic efficiency are critical objectives in supply chain transportation, both of which are influenced by strategic decisions on shipping mode. An integrated framework combining an efficient simulator with an intelligent decision-making algorithm can provide an observable, low-risk environment for transportation strategy design. 
An ideal simulation-decision framework must (1) generalize effectively across various settings, (2) reflect fine-grained transportation dynamics, (3) integrate historical experience with predictive insights, and (4) maintain tight integration between simulation feedback and policy refinement. We propose the Sim-to-Dec framework to satisfy these requirements. Specifically, Sim-to-Dec consists of a generative simulation module, which leverages autoregressive modeling to simulate continuous state changes, reducing dependence on handcrafted domain-specific rules and enhancing robustness against data fluctuations; and a history–future dual-aware decision model, refined iteratively through end-to-end optimization with simulator interactions. Extensive experiments conducted on three real-world datasets demonstrate that Sim-to-Dec significantly improves timely delivery rates and profit. pdf
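A stripped-down version of the simulate-then-decide loop described above can be written in a few lines: an autoregressive state model generates next-period conditions and a simple policy chooses the shipping mode. All coefficients, costs, and thresholds below are illustrative assumptions, not the trained Sim-to-Dec modules.

```python
import numpy as np

# Toy simulate-then-decide loop: AR(1) congestion dynamics plus a threshold policy
# for choosing between air and sea freight (all numbers are illustrative assumptions).

rng = np.random.default_rng(0)
phi, sigma = 0.8, 0.2            # AR(1) coefficient and noise level for congestion
air_cost, sea_cost = 5.0, 1.0    # per-shipment cost of each mode
late_penalty = 8.0               # penalty when a sea shipment meets high congestion

congestion, total_cost = 0.5, 0.0
for week in range(52):                            # one simulated year of weekly decisions
    congestion = phi * congestion + rng.normal(0.0, sigma)
    use_air = congestion > 0.7                    # threshold decision policy
    total_cost += air_cost if use_air else sea_cost
    if not use_air and congestion > 0.9:          # sea shipment arrives late
        total_cost += late_penalty

print(round(total_cost, 2))
```

Replacing the hand-written AR(1) step with a learned generative model, and the fixed threshold with a policy refined against simulator feedback, is essentially the upgrade the Sim-to-Dec framework describes.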
A Digital Twin of Water Network for Exploring Sustainable Water Management Strategies Souvik Barat, Abhishek Yadav, and Vinay Kulkarni (Tata Consultancy Services Ltd); Gurudas Nulkar and Soomrit Chattopadhyay (Gokhale Institute of Politics and Economics, Pune); and Ashwini Keskar (Pune Knowledge Cluster, Pune) Program Track: Simulation as Digital Twin Program Tags: Emergent Behavior, Python, System Dynamics Abstract AbstractEfficient water management is an increasingly critical challenge for policymakers tasked with ensuring reliable water availability for agriculture, industry and domestic use while mitigating flood risks during monsoon seasons. This challenge is especially pronounced in regions where water networks rely primarily on rain-fed systems. Managing such a water ecosystem is complex due to inherent constraints on water sources, storage, and flow; environmental uncertainties such as variable rainfall and evaporation; and the growing demands of urbanization, industrial expansion, and equitable interstate water sharing. In this study, we present a stock-and-flow-based simulatable digital twin designed to accurately represent the dynamics of a rain-dependent water network comprising dams, rivers and associated environmental and usage factors. The model supports scenario-based simulation and the evaluation of mitigation policies to enable evidence-based decision-making. We demonstrate the usefulness of our approach using a real water body network from western India that covers more than 300 km of heterogeneous landscape. pdfA Hybrid Simulation-based Approach for Adaptive Production and Demand Management in Competitive Markets S. M. Atikur Rahman, Md Fashiar Rahman, and Tzu-Liang Bill Tseng (The University of Texas at El Paso) Program Track: Hybrid Modeling and Simulation Program Tags: AnyLogic, Complex Systems, System Dynamics Abstract AbstractManaging production, inventory, and demand forecasting in a competitive market is challenging due to consumer behavior and market dynamics. Inefficient forecasting can lead to inadequate inventory, an interrupted production schedule, and eventually, less profit. This study presents a simulation-based decision support framework integrating discrete event simulation (DES) and system dynamics (SD). DES models production and inventory management to ensure optimized resource utilization, while SD is employed to incorporate market dynamics. This model jointly determines demand through purchase decisions from potential users and replacement demand from existing adopters. Further refinements prevent sales declines and sustain long-term market stability. This hybrid simulation approach provides insights into demand evolution and inventory optimization, aiding strategic decision-making. Finally, we propose and integrate a dynamic marketing strategy algorithm with the simulation model, which results in around 38% more demand growth than the existing demand curve. The proposed approach was validated through rigorous experimentation and optimization analysis. pdfA Novel System Dynamics Approach to DC Microgrid Power Flow Analysis Jose González de Durana (University of the Basque Country) and Luis Rabelo and Marwen Elkamel (University of Central Florida) Program Track: Modeling Methodology Program Tags: AnyLogic, Complex Systems, Conceptual Modeling, System Dynamics Abstract AbstractThis paper employs System Dynamics (SD) to model and analyze DC power distribution systems, focusing on methodological development and using microgrids as case studies.
The approach follows a bottom-up methodology, starting with the fundamentals of DC systems and building toward more complex configurations. We coin this approach “Power Dynamics,” which uses stocks and flows to represent electrical components such as resistors, batteries, and power converters. SD offers a time-based, feedback-driven approach that captures component behaviors and system-wide interactions. This framework provides computational efficiency, adaptability, and visualization, enabling the integration of control logic and qualitative decision-making elements. Three case studies of microgrids powered by renewable energy demonstrate the framework’s effectiveness in simulating energy distribution, load balancing, and dynamic power flow. The results highlight SD’s potential as a valuable modeling tool for studying modern energy systems, supporting the design of flexible and resilient infrastructures. pdfAgent-based Social Simulation of Spatiotemporal Process-triggered Graph Dynamical Systems Zakaria Mehrab, S.S. Ravi, Henning Mortveit, Srini Venkatramanan, Samarth Swarup, Bryan Lewis, David Leblang, and Madhav Marathe (University of Virginia) Program Track: Agent-based Simulation Program Tags: Complex Systems, Emergent Behavior, System Dynamics Abstract AbstractGraph dynamical systems (GDSs) are widely used to model and simulate realistic multi-agent social dynamics, including societal unrest. This involves representing the multi-agent system as a network and assigning to each vertex a function that describes how it updates its state based on the states of its neighbors. However, in many contexts, social dynamics are triggered by external processes, which can affect the state transitions of agents. The classical GDS formalism does not incorporate such processes. We introduce the STP-GDS framework, which allows a GDS to be triggered by spatiotemporal background processes. We present a rigorous definition of the framework, followed by a formal analysis that estimates the size of the active neighborhood under two types of process distributions. The real-life applicability of the framework is further highlighted by an additional case study involving evacuation due to natural events, where we analyze collective agent behaviors under heterogeneous environmental and spatial settings. pdfExploring Integration of Surrogate Models Through A Case Study on Variable Frequency Drives Dušan Šturek (Karlsruhe Institute of Technology, Danfoss Power Electronics and Drives) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, University of Southern Denmark) Program Track: Data Science and Simulation Program Tags: Data Driven, System Dynamics Abstract AbstractHigh-fidelity simulation models of variable frequency drives often incur expensive computation due to high granularity, complex physics and highly stiff components, hindering real-time Digital Twin Industry 4.0 applications. Surrogate models can outperform simulation solvers by orders of magnitude, potentially making real-time virtual drives feasible within practical computational limits. Despite this potential, current surrogate models suffer from limited generalizability and robustness. In this paper, we present an industrial case study exploring the combination of deep learning with surrogate modeling for simulating variable frequency drives, specifically replacing the induction motor high-fidelity component.
We investigate the performance of Long Short-Term Memory-based surrogates, examining how their prediction accuracy and training time vary with synthetic datasets of different sizes, and how well the induction motor surrogates generalize across different motor resistances. This initial study aims to establish a foundation for further development, benchmarking and automation of the surrogate modeling workflow for simulation enhancement. pdfIntegrating Decision Field Theory Within System Dynamics Framework For Modeling the Adoption Process of Ride Sourcing Services Best Contributed Theoretical Paper - Finalist Seunghan Lee (Mississippi State University) and Jee Eun Kang (University at Buffalo, SUNY) Program Track: Hybrid Modeling and Simulation Program Tags: AnyLogic, Complex Systems, System Dynamics Abstract AbstractThe rise of ride-sourcing services has changed the transportation industry, reshaping urban mobility. This paper presents an integrated framework of the adoption of ride-sourcing services and its impact on transportation markets using a combined approach of System Dynamics (SD) and Extended-Decision Field Theory (E-DFT). Drawing on data from ride-sourcing platforms such as Uber and Lyft, the study investigates the temporal dynamics and trends of ride-sourcing demand. SD modeling is employed to capture the complex interactions and feedback loops within the ride-sourcing ecosystem at the system level. The integration of System Dynamics and extended DFT allows for a more comprehensive and holistic modeling of the ride-sourcing market. It enables exploration of various scenarios and policy interventions, providing insights into the long-term behavior of the market and facilitating evidence-based decision-making by policymakers and industry stakeholders while accommodating individual users' decisions based on changing preferences and environments. pdfToward Automating System Dynamics Modeling: Evaluating LLMs in the Transition from Narratives to Formal Structures Jhon G. Botello (Virginia Modeling, Analysis, and Simulation Center) and Brian Llinas, Jose Padilla, and Erika Frydenlund (Old Dominion University) Program Track: Simulation and Artificial Intelligence Program Tags: Complex Systems, Conceptual Modeling, Output Analysis, System Dynamics Abstract AbstractTransitioning from narratives to formal system dynamics (SD) models is a complex task that involves identifying variables, their interconnections, feedback loops, and the dynamic behaviors they exhibit. This paper investigates how large language models (LLMs), specifically GPT-4o, can support this process by bridging narratives and formal SD structures. We compare zero-shot prompting with chain-of-thought (CoT) iterations using three case studies based on well-known system archetypes. We evaluate the LLM’s ability to identify the systemic structures, variables, causal links, polarities, and feedback loop patterns. We present both quantitative and qualitative assessments of the results. Our study demonstrates the potential of guided reasoning to improve the transition from narratives to system archetypes. We also discuss the challenges of automating SD modeling, particularly in scaling to more complex systems, and propose future directions for advancing toward automated modeling and simulation in SD assisted by AI.
pdfUsing the Tool Command Language for a Flight Simulation Flight Dynamics Model Frank Morlang (Private Person) and Steffen Strassburger (Ilmenau University of Technology) Program Track: Aviation Modeling and Analysis Program Tags: Distributed, System Dynamics Abstract AbstractThis paper introduces a methodology for simulating flight dynamics utilizing the Tool Command Language (Tcl). Tcl, created by John Ousterhout, was conceived as an embeddable scripting language for an experimental Computer Aided Design (CAD) system. As a mature and still-evolving language recognized for its simplicity, versatility, and extensibility, Tcl is a compelling candidate for integrating flight dynamics functionality. The work presents an extension method utilizing Tcl's adaptability for a novel type of flight simulation programming. Initial test findings demonstrate performance appropriate for the creation of human-in-the-loop real-time flight simulations. The potential for efficient and precise modeling of complex future distributed simulation elements is discussed, and recommendations for subsequent development priorities are given. pdf
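The GPT-4o study above compares zero-shot prompting with chain-of-thought iterations for extracting SD structure from narratives. The snippet below only sketches what such prompt templates might look like (the wording and the example narrative are assumptions, and no model call is made); it is not the authors' prompt set.

```python
# Illustrative prompt templates only (no LLM call); wording is an assumption,
# not the prompts used in the study above.

narrative = "As sales grow, the firm hires more staff, which raises costs and slows hiring."

zero_shot = (
    "Extract the system dynamics structure from the narrative below. "
    "List variables, causal links with polarity (+/-), and feedback loops.\n\n"
    f"Narrative: {narrative}"
)

chain_of_thought = (
    "Read the narrative below and reason step by step:\n"
    "1. Identify candidate variables.\n"
    "2. For each pair of variables, decide whether a causal link exists and give its polarity.\n"
    "3. Trace closed chains of links and label each loop reinforcing or balancing.\n"
    "Then report the final variables, links, and loops.\n\n"
    f"Narrative: {narrative}"
)

print(zero_shot)
print("---")
print(chain_of_thought)
```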
Dialectic Models for Documenting and Conducting Simulation Studies: Exploring Feasibility Steffen Zschaler (King's College London), Pia Wilsdorf (University of Rostock), Thomas Godfrey (Aerogility Ltd), and Adelinde M. Uhrmacher (University of Rostock) Program Track: Modeling Methodology Program Tags: Metamodeling, Validation Abstract AbstractValidation and documentation of rationale are central to simulation studies. Most current approaches focus only on individual simulation artifacts (most typically simulation models) and their validity rather than their contribution to the overall simulation study. Approaches that aim to validate simulation studies as a whole either impose structured processes with the implicit assumption that this will ensure validity, or they rely on capturing provenance and rationale, most commonly in natural language, following accepted documentation guidelines. Inspired by dialectic approaches for developing mathematical proofs, we explore the feasibility of capturing validity and rationale information as a study unfolds through agent dialogs that also generate the overall simulation-study argument. We introduce a formal framework, an initial catalog of possible interactions, and a proof-of-concept tool to capture such information about a simulation study. We illustrate the ideas in the context of a cell biological simulation study. pdfGoal-oriented Generation of Simulation Experiments Anja Wolpers, Pia Wilsdorf, Fiete Haack, and Adelinde M. Uhrmacher (University of Rostock) Program Track: Modeling Methodology Program Tags: Conceptual Modeling, Java, Validation Abstract AbstractAutomatically generating and executing simulation experiments promises to make running simulation
studies more efficient, less error-prone, and easier to document and replicate. However, during experiment
generation, background knowledge is required regarding which experiments using which inputs and outputs
are useful to the modeler. Therefore, we conducted an interview study to identify what types of experiments
modelers perform during simulation studies. From the interview results, we defined four general goals
for simulation experiments: exploration, confirmation, answering the research question, and presentation.
Based on the goals, we outline and demonstrate an approach for automatically generating experiments by
utilizing an explicit and thoroughly detailed conceptual model. pdfModel Validation and LLM-based Model Enhancement for Analyzing Networked Anagram Experiments Hao He, Xueying Liu, and Xinwei Deng (Virginia Tech) Program Track: Modeling Methodology Program Tags: Neural Networks, Validation Abstract AbstractAgent-based simulations for networked anagram games, often taking advantage of the experimental data, are useful tools to investigate collaborative behaviors. To confidently incorporate the statistical analysis from the experimental data into the ABS, it is crucial to conduct sufficient validation for the underlying statistical models. In this work, we propose a systematic approach to evaluate the validity of statistical methods of players’ action sequence modeling for the networked anagram experiments. The proposed method can appropriately quantify the effect and validity of expert-defined covariates for modeling the players’ action sequence data. We further develop a Large Language Model (LLM)-guided method to augment the covariate set, employing iterative text summarization to overcome token limits. The performance of the proposed methods is evaluated under different metrics tailored for imbalanced data in networked anagram experiments. The results highlight the potential of LLM-driven feature discovery to refine the underlying statistical models used in agent-based simulations. pdfOptimizing Precast Concrete Production: a Discrete-event Simulation Approach with Simphony Julie Munoz, Mohamad Itani, Mohammad Elahi, Anas Itani, and Yasser Mohamed (University of Alberta) Program Track: Project Management and Construction Program Tags: Data Driven, Validation Abstract AbstractPrecast concrete manufacturers increasingly face throughput bottlenecks as market demand rises and curing-area capacity reaches its limit. This paper develops a validated discrete-event simulation (DES) model of a Canadian precast panel plant using the Simphony platform. Field observations, time studies, and staff interviews supply task durations, resource data, and variability distributions. After verification and validation against production logs, two improvement scenarios are tested: (1) doubling curing beds and (2) halving curing time with steam curing. Scenario A reduces total cycle time by 26 %, while Scenario B achieves a 24 % reduction and lowers curing-bed utilization by 5 %. Both scenarios cut crane waiting and queue lengths, demonstrating that relieving the curing bottleneck drives system-wide gains. The study confirms DES as an effective, low-risk decision-support tool for off-site construction, offering plant managers clear, data-driven guidance for investment planning and lean implementation. pdfPreserving Dependencies in Partitioned Digital Twin Models for Enabling Modular Validation Ashkan Zare (University of Southern Denmark) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology) Program Track: Simulation as Digital Twin Program Tags: Petri Nets, Validation Abstract AbstractLeveraging Digital Twins, as near real-time replicas of physical systems, can help identify inefficiencies and optimize production in manufacturing systems. Digital Twins’ effectiveness, however, relies on continuous validation of the underlying models to ensure accuracy and reliability, which is particularly challenging for complex, multi-component systems where different components evolve at varying rates. 
Modular validation mitigates this challenge by decomposing models into smaller sub-models, allowing for tailored validation strategies. A key difficulty in this approach is preserving the interactions and dependencies among the sub-models while validating them individually; isolated validation may yield individually valid sub-models while failing to ensure overall model consistency. To address this, we build on our previously proposed modular validation framework and introduce an approach that enables sub-model validation while maintaining interdependencies. By ensuring that the validation process reflects these dependencies, our method enhances the effectiveness of Digital Twins in dynamic manufacturing environments. pdfSimulating Front-End Semiconductor Supply Chains to assess Master Plans under Uncertainty: a Case Study Aaron Joël Sieders and Cas Rosman (NXP Semiconductors N.V.), Collin Drent (Eindhoven University of Technology), and Alp Akcay (Northeastern University) Program Track: MASM: Semiconductor Manufacturing Program Tag: Validation Abstract AbstractThis research presents an aggregated simulation model for the front-end semiconductor supply chain to assess master plans, focusing on the impact of demand and supply uncertainties on the key performance indicators on-time delivery and inventory on hand. Supply uncertainty is modeled using discrete distributions of historical cycle times, incorporating load-dependent cycle times through a non-linear regression model. To model demand uncertainty, we use future forecasts and adjust them by sampling from distributions of historical forecast percentage errors. By comparing master plan performance under uncertain conditions with those from deterministic scenarios, the model provides valuable insights into how these uncertainties influence supply chain performance. Using data from NXP Semiconductors N.V., a Dutch semiconductor
manufacturing and design company, we demonstrate the model’s applicability and offer practical guidance for industry practitioners. Based on numerical experiments, we conclude that incorporating demand and supply uncertainty yields significantly different outcomes than deterministic planning. pdf
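The demand-uncertainty mechanism described above (adjusting future forecasts by sampling from distributions of historical forecast percentage errors) can be illustrated with a short resampling sketch; the error history and forecast values below are placeholders, not NXP data.

```python
import numpy as np

# Perturb a future forecast by resampling historical forecast percentage errors
# (placeholder numbers; the idea follows the abstract above, not NXP's data or model).

rng = np.random.default_rng(0)
historical_pct_error = np.array([-0.12, 0.05, 0.20, -0.03, 0.08, -0.15, 0.10])  # (actual - forecast) / forecast
forecast = np.array([1000.0, 1200.0, 900.0])             # forecast demand for three future periods

n_scenarios = 1000
errors = rng.choice(historical_pct_error, size=(n_scenarios, forecast.size), replace=True)
demand_scenarios = forecast * (1.0 + errors)             # one stochastic demand path per row

print(demand_scenarios.mean(axis=0))                     # scenario mean per period
print(np.percentile(demand_scenarios, [5, 95], axis=0))  # 5th/95th percentile per period
```

Feeding such scenario sets through the master plan, instead of a single deterministic forecast, is what exposes the differences in on-time delivery and inventory reported above.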
Central Limit Theorem for a Randomized Quasi-Monte Carlo Estimator of a Smooth Function of Means Marvin K. Nakayama (New Jersey Institute of Technology), Bruno Tuffin (Inria), and Pierre L'Ecuyer (Université de Montréal) Program Track: Analysis Methodology Program Tags: Monte Carlo, Output Analysis, Variance Reduction Abstract AbstractConsider estimating a known smooth function (such as a ratio) of unknown means. Our paper accomplishes this by first estimating each mean via randomized quasi-Monte Carlo and then evaluating the function at the estimated means. We prove that the resulting plug-in estimator obeys a central limit theorem by first establishing a joint central limit theorem for a triangular array of estimators of the vector of means and then employing the delta method. pdfFast Monte Carlo Irene Aldridge (Cornell University) Program Track: Modeling Methodology Program Tags: Monte Carlo, Variance Reduction Abstract AbstractThis paper proposes an eigenvalue-based small-sample approximation of the celebrated Markov Chain Monte Carlo that delivers an invariant steady-state distribution that is consistent with traditional Monte Carlo methods. The proposed eigenvalue-based methodology reduces the number of paths required for Monte Carlo from as many as 1,000,000 to as few as 10 (depending on the simulation time horizon T), and delivers comparable, distributionally robust results, as measured by the Wasserstein distance. The proposed methodology also produces a significant variance reduction in the steady-state distribution. pdfImportance Sampling for Latent Dirichlet Allocation Best Contributed Theoretical Paper - Finalist Paul Glasserman and Ayeong Lee (Columbia University) Program Track: Analysis Methodology Program Tags: Monte Carlo, Variance Reduction Abstract AbstractLatent Dirichlet Allocation (LDA) is a method for finding topics in text data. Evaluating an LDA model entails estimating the expected likelihood of held-out documents. This is commonly done through Monte Carlo simulation, which is prone to high relative variance. We propose an importance sampling estimator for this problem and characterize the theoretical asymptotic statistical efficiency it achieves in large documents. We illustrate the method in simulated data and in a dataset of news articles. pdfUsing Adaptive Basis Search Method In Quasi-Regression To Interpret Black-Box Models Ambrose Emmett-Iwaniw and Christiane Lemieux (University of Waterloo) Program Track: Analysis Methodology Program Tags: Monte Carlo, R, Variance Reduction Abstract AbstractQuasi-Regression (QR) is an inference method that approximates a function of interest (e.g., black-box model) for interpretation purposes by a linear combination of orthonormal basis functions of $L^2[0,1]^{d}$. The coefficients are integrals that do not have an analytical solution and therefore must be estimated, using Monte Carlo or Randomized Quasi-Monte Carlo (RQMC). The QR method can be time-consuming if the number of basis functions is large. If the function of interest is sparse, many of these basis functions are irrelevant and could thus be removed, but they need to be correctly identified first. We address this challenge by proposing new adaptive basis search methods based on the RQMC method that adaptively select important basis functions. These methods are shown to be much faster than previously proposed QR methods and are overall more efficient. 
pdfWhen Machine Learning Meets Importance Sampling: A More Efficient Rare Event Estimation Approach Ruoning Zhao and Xinyun Chen (The Chinese University of Hong Kong, Shenzhen) Program Track: Data Science and Simulation Program Tags: Rare Events, Sampling, Variance Reduction Abstract AbstractDriven by applications in telecommunication networks, we explore the simulation task of estimating rare event probabilities for tandem queues in their steady state. Existing literature has recognized that importance sampling methods can be inefficient, due to the exploding variance of the path-dependent likelihood functions. To mitigate this, we introduce a new importance sampling approach that utilizes a marginal likelihood ratio on the stationary distribution, effectively avoiding the issue of excessive variance. In addition, we design a machine learning algorithm to estimate this marginal likelihood ratio using importance sampling data. Numerical experiments indicate that our algorithm outperforms the classic importance sampling methods. pdf
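The plug-in estimator analyzed by Nakayama, Tuffin, and L'Ecuyer above rests on the classical delta-method step, stated here in generic notation (the paper's contribution is establishing the joint CLT under randomized quasi-Monte Carlo, whose conditions are not reproduced here):

```latex
% Generic delta-method step for a smooth function g of estimated means:
\sqrt{n}\,\bigl(\hat{\boldsymbol{\mu}}_n - \boldsymbol{\mu}\bigr) \Rightarrow \mathcal{N}(\mathbf{0},\Sigma)
\quad\text{and } g \text{ differentiable at } \boldsymbol{\mu}
\;\;\Longrightarrow\;\;
\sqrt{n}\,\bigl(g(\hat{\boldsymbol{\mu}}_n) - g(\boldsymbol{\mu})\bigr) \Rightarrow
\mathcal{N}\!\bigl(0,\ \nabla g(\boldsymbol{\mu})^{\top}\,\Sigma\,\nabla g(\boldsymbol{\mu})\bigr).
```

For a ratio g(a, b) = a/b, the gradient is (1/b, -a/b^2), which recovers the familiar variance expression for ratio estimators.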