Track Coordinator - Advanced Tutorials: Javier Faulin (Institute of Smart Cities, Public University of Navarre), Giulia Pedrielli (Arizona State University)

Advanced Tutorials: Inside Digital Twins: What They Are and How They Work
Session Chair: Haobin Li (National University of Singapore, Centre for Next Generation Logistics)
Inside Digital Twins: What They Are and How They Work
Steffen Strassburger (Ilmenau University of Technology)
Program Track: Advanced Tutorials
Abstract: This tutorial gives an overview of the state of the art of Digital Twins (DTs). We discuss what they are, what purposes they may fulfill, how they differ from related technologies, and how they may work internally. In terms of applications, we focus on Digital Twins of production and logistics systems, but most of our explanations should also be valid beyond these domains. Finally, we discuss open issues and potential directions for future research.

Advanced Tutorials: AI-Empowered Data-Driven Agent-Based Modeling and Simulation: Challenges, Methodologies, and Future Perspectives
Session Chair: Alp Akcay (Northeastern University)
AI-Empowered Data-Driven Agent-Based Modeling and Simulation: Challenges, Methodologies, and Future Perspectives
Bhakti Stephan Onggo (University of Southampton, CORMSIS); Zhou He (University of Chinese Academy of Sciences); Peng Lu (Central South University); and Quan Bai and Yuxuan Hu (University of Tasmania)
Program Track: Advanced Tutorials
Abstract: Agent-based modeling and simulation (ABMS) has become one of the most popular simulation methods for scientific research and real-world applications. This tutorial paper explores recent developments in the use of artificial intelligence, including large language models and machine learning, and digital twins in ABMS research. Given the different perspectives on ABMS, this paper starts with basic ABMS concepts and their implementation using an online platform called AgentBlock.net.
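The basic ABMS loop that such tutorials start from (agents holding a state and synchronously updating it from their neighbors' states each tick) can be sketched in a few lines of plain Python. This is a generic illustration, not code from the tutorial or from AgentBlock.net; the ring topology, majority update rule, and parameters are invented for the example.

```python
import random

def step(states):
    """One synchronous update: each agent adopts its neighborhood majority."""
    n = len(states)
    new = []
    for i, s in enumerate(states):
        neigh = [states[(i - 1) % n], s, states[(i + 1) % n]]
        new.append(1 if sum(neigh) >= 2 else 0)
    return new

def run(n_agents=20, ticks=10, seed=42):
    """Initialize agents on a ring with random binary states and iterate."""
    rng = random.Random(seed)
    states = [rng.randint(0, 1) for _ in range(n_agents)]
    for _ in range(ticks):
        states = step(states)
    return states

states = run()
print(sum(states), "of", len(states), "agents in state 1 after 10 ticks")
```

Even this toy model exhibits the ABMS hallmark: a global pattern (stable clusters of like-state agents) emerges from purely local update rules.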
Advanced Tutorials: Simulation Optimization and Stochastic Gradients: Theory & Practice
Session Chair: Jie Xu (George Mason University)
Simulation Optimization and Stochastic Gradients: Theory & Practice
Michael Fu (University of Maryland, College Park); Jiaqiao Hu (Stony Brook University, State University of New York); Ilya Ryzhov (University of Maryland, College Park); and Enlu Zhou (Georgia Institute of Technology)
Program Track: Advanced Tutorials
Abstract: This tutorial addresses the use of stochastic gradients in simulation optimization, including methodology and algorithms, theoretical convergence analysis, and applications. Specific topics include stochastic gradient estimation techniques, both direct unbiased and indirect finite-difference-based; stochastic approximation algorithms and their convergence rates; stochastic gradient descent (SGD) algorithms with momentum and with reuse of past samples via importance sampling; and a real-world application in geographical partitioning.

Advanced Tutorials: A Comprehensive Approach to Optimizing Complex Systems
Session Chair: Giulia Pedrielli (Arizona State University)
A Comprehensive Approach to Optimizing Complex Systems
Zelda Zabinsky and Pariyakorn Maneekul (University of Washington)
Program Track: Advanced Tutorials
Abstract: Optimization, viewed broadly, can aid in designing, analyzing, and operating complex systems, from strategic policy planning to last-mile distribution to optimal control of dynamical systems. The optimization model, including decision variables, objective functions, and constraints, requires performance metrics that are often evaluated via black-box simulations. We summarize algorithms that can address black-box noisy functions, mixed integer- and real-valued variables, and multiple objectives. Multiple models (e.g., Gaussian processes, neural networks, queueing networks) can be used in conjunction with a computationally expensive model (e.g., simulation) to predict performance and reduce overall computation. A key issue in solving an optimization model is to dynamically allocate computational effort to efficiently search for the global optimum. The dilemma of exploration vs. exploitation vs. estimation is evident in machine learning and global optimization. We discuss sampling distributions that provide insights into this balancing act, and how ideas in quantum optimization provide approaches to optimizing complex systems.

Advanced Tutorials: Connecting Quantum Computing with Classical Stochastic Simulation
Session Chair: Zelda Zabinsky (University of Washington)
Connecting Quantum Computing with Classical Stochastic Simulation
Jose Blanchet (Stanford University), Mark Squillante (IBM T. J. Watson Research Center), and Mario Szegedy and Guanyang Wang (Rutgers University)
Program Track: Advanced Tutorials
Abstract: This tutorial introduces quantum approaches to Monte Carlo computation with applications in computational finance. We outline the basics of quantum computing, using Grover's algorithm for unstructured search to build intuition. We then move gradually to amplitude estimation problems and applications to counting and Monte Carlo integration, again using Grover-type iterations. A hands-on Python/Qiskit implementation illustrates these concepts applied to finance. The paper concludes with a discussion of current challenges in scaling quantum simulation techniques.

Advanced Tutorials: Adaptive Intelligence: Combining Digital Twin Precision with AI Foresight for Real-Time Decision Making
Session Chair: Enlu Zhou (Georgia Institute of Technology)
Adaptive Intelligence: Combining Digital Twin Precision with AI Foresight for Real-Time Decision Making
Chun-Hung Chen and Jie Xu (George Mason University) and Enver Yucesan (INSEAD)
Program Track: Advanced Tutorials
Abstract: Digital Twins, virtual models that accurately replicate the dynamic behavior of physical systems, are on the cusp of enabling a revolutionary transformation, not only for the design but also for the real-time control of dynamic systems. In this tutorial, we define the key building blocks of Digital Twins and explore the key challenges they face in enabling real-time decision making. We discuss adaptive intelligence, an innovative reasoning framework that combines AI-driven approaches with simulation optimization techniques to enhance the real-time decision-making capabilities, and hence the value, of Digital Twins.

Advanced Tutorials: Methods of Plausible Inference: The Definitive Cookbook
Session Chair: Jingtao Zhang (Virginia Tech)
Methods of Plausible Inference: The Definitive Cookbook
Jinbo Zhao (Texas A&M University), Gregory Keslin (New Jersey Institute of Technology), David Eckman (Texas A&M University), and Barry Nelson (Northwestern University)
Program Track: Advanced Tutorials
Abstract: This tutorial introduces a new cuisine to the simulationist kitchen: plausible inference, the culinary art of output analysis that concocts statistical inferences about possibly unsimulated settings (e.g., solutions, parameters, decision variables) by blending and baking experiment design and problem structure. Tasty applications include screening out settings with undesirable or uninteresting performance, or constructing confidence intervals and regions for a setting's measure(s) of performance. This tutorial synthesizes several disparate works on plausible inference into a cohesive, unifying framework with the aim of demonstrating how plausible inference recipes can satisfy a range of analyst appetites. Bon appétit!
Track Coordinator - Agent-Based Simulation: Andrew J. Collins (Old Dominion University), Martijn Mes (University of Twente)

Agent-based Simulation: Agent-Based Applications (Tags: AnyLogic, C++, Complex Systems, Output Analysis, Supply Chain)
Session Chair: Martijn Mes (University of Twente)
Optimizing Task Scheduling in Primary Healthcare: A Reinforcement Learning Approach with Agent-based Simulation
Cristián Cárdenas (Universidad Austral de Chile), Gabriel Bustamante and Hernan Pinochet (Universidad de Santiago de Chile), Veronica Gil-Costa (Universidad Nacional de San Luis), Luis Veas-Castillo (Universidad Austral de Chile), and Mauricio Marin (Universidad de Santiago de Chile)
Program Track: Agent-based Simulation
Program Tag: C++
Abstract: The integration of Agent-Based Simulation (ABS) and Reinforcement Learning (RL) has emerged as a promising and effective approach for supporting decision-making in medical and hospital settings. This study proposes a novel framework that combines an Agent-Based Simulation with a Double Deep Q-Network (DDQN) Reinforcement Learning model to optimize the task scheduling of healthcare professionals responsible for elderly patient care. Simulations were conducted over a 365-day period involving 250 patients, each managed by a healthcare coordinator who schedules appointments. Patients autonomously decide whether to attend appointments and whether to adhere to medical recommendations. Results show the effectiveness of the RL model in minimizing health risks, with 84.8% of patients maintaining or improving their initial health risk levels, while only 15.2% experienced an increase.
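The double-estimator idea behind the DDQN mentioned above can be illustrated in tabular form: one value table selects the greedy next action while the other evaluates it, which curbs the overestimation bias of plain Q-learning. This is a hypothetical toy sketch of the update rule only, not the paper's neural-network scheduler; the states, actions, and values are invented.

```python
def double_q_update(Qa, Qb, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Update Qa toward r + gamma * Qb-value of Qa's greedy next action."""
    best_next = max(Qa[s_next], key=Qa[s_next].get)  # Qa selects the action...
    target = r + gamma * Qb[s_next][best_next]       # ...Qb evaluates it
    Qa[s][a] += alpha * (target - Qa[s][a])

# Tiny 2-state, 2-action example with hand-picked values.
Qa = {0: {"go": 0.0, "wait": 0.0}, 1: {"go": 0.0, "wait": 0.0}}
Qb = {0: {"go": 0.0, "wait": 1.0}, 1: {"go": 2.0, "wait": 0.0}}
double_q_update(Qa, Qb, s=0, a="go", r=1.0, s_next=1)
print(Qa[0]["go"])
```

In practice the roles of the two estimators are swapped (or, in DDQN, played by online and target networks), but the decoupling of action selection from action evaluation is the same.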
Simulation-based Analysis of a Hydrogen Infrastructure to Supply a Regional Hydrogen Hub
Michael Teucke, Abderrahim Ait Alla, Lennart Steinbacher, Eike Broda, and Michael Freitag (BIBA - Bremer Institut für Produktion und Logistik GmbH at the University of Bremen)
Program Track: Agent-based Simulation
Program Tags: AnyLogic, Supply Chain
Abstract: Many countries plan to adopt hydrogen as a major energy carrier, which requires a robust infrastructure to meet rising demand. This paper presents a simulation model that quantitatively analyzes the capacity of a potential hydrogen infrastructure in a coastal region of Northern Germany to supply a hydrogen hub in Bremen. The model covers ship-based imports of hydrogen, either as liquid hydrogen or as ammonia, unloading at port terminals, conversion to gaseous hydrogen, pipeline transport to the hub, and end-use consumption. Various scenarios are simulated to quantitatively assess infrastructure needs under projected demand. Results show that ammonia-based imports offer greater supply reliability under low and medium demand, while liquid hydrogen performs better under high demand due to faster unloading times. Demand-driven supply policies generally outperform fixed-interval approaches by maintaining higher storage levels and aligning supply more closely with consumption patterns.

Evaluating Comprehension of Agent-Based Social Simulation Visualization Techniques: A Framework Based on Statistical Literacy and Cognitive Processing
Kotaro Ohori and Kyoko Kageura (Toyo University) and Shohei Yamane (Fujitsu Ltd.)
Program Track: Agent-based Simulation
Program Tags: Complex Systems, Output Analysis
Abstract: Agent-based social simulation (ABSS) has gained attention as a powerful method for analyzing complex social phenomena. However, visualizations of ABSS outputs are often difficult to interpret for users without expertise in ABSS modeling. This study analyzes how statistical literacy affects the comprehension of ABSS visualizations, based on cognitive processes defined in educational psychology. A web-based survey using five typical visualizations based on Schelling's segregation model was conducted in Japan. The results showed a moderate positive correlation between statistical literacy and visualization comprehension, while some visualizations remained difficult to interpret even for participants with high literacy. Further machine learning analysis revealed that model performance varied by cognitive stage, and that basic and applied statistical skills had different impacts on comprehension across stages. These findings provide a foundation for designing visualizations tailored to user characteristics and offer insights for effective communication based on ABSS.

Agent-based Simulation: Agent-Based Population (Tags: Complex Systems, Emergent Behavior, Neural Networks, Open Source, Python, System Dynamics)
Session Chair: Hamdi Kavak (George Mason University)
Hierarchical Population Synthesis Using a Neural-Differentiable Programming Approach
Imran Mahmood Q. Hashmi, Anisoara Calinescu, and Michael Wooldridge (University of Oxford)
Program Track: Agent-based Simulation
Program Tags: Complex Systems, Neural Networks, Open Source, Python
Abstract: Advances in Artificial Intelligence have enabled more accurate and scalable modelling of complex social systems, which depends on realistic high-resolution population data. We introduce a novel methodology for generating hierarchical synthetic populations using differentiable programming, producing the detailed demographic structures essential for simulation and analysis. Existing approaches struggle to model hierarchical population structures and to optimise over discrete demographic attributes. Leveraging feed-forward neural networks and Gumbel-Softmax encoding, our approach transforms aggregated census and survey data into continuous, differentiable forms, enabling gradient-based optimisation to match target demographics with high fidelity. The framework captures multi-scale population structures, including household composition and socio-economic diversity, with verification via logical rules and validation against census cross-tables. A UK case study shows that our model closely replicates real-world distributions. This scalable approach provides simulation modellers and analysts with high-fidelity synthetic populations as input for agent-based simulations of complex societal systems, enabling behaviour simulation, intervention evaluation, and demographic analysis.

Quantitative Comparison of Population Synthesis Techniques
David Han (Cornell University), Samiul Islam and Taylor Anderson (George Mason University), Andrew T. Crooks (University at Buffalo), and Hamdi Kavak (George Mason University)
Program Track: Agent-based Simulation
Abstract: Synthetic populations serve as the building blocks for predictive models in many domains, including transportation, epidemiology, and public policy. Using realistic synthetic populations is therefore essential in these domains. Given the wide range of available techniques, determining which methods are most effective can be challenging. In this study, we investigate five synthetic population generation techniques in parallel to synthesize population data for various regions in North America. Our findings indicate that iterative proportional fitting (IPF) and conditional-probability techniques perform best across different regions, geographic scales, and increased numbers of attributes. Furthermore, IPF has lower implementation complexity, making it an ideal technique for various population synthesis tasks. We have documented the evaluation process and shared our source code to enable further research on advancing the field of modeling and simulation.

Agent-based Social Simulation of Spatiotemporal Process-triggered Graph Dynamical Systems
Zakaria Mehrab, S.S. Ravi, Henning Mortveit, Srini Venkatramanan, Samarth Swarup, Bryan Lewis, David Leblang, and Madhav Marathe (University of Virginia)
Program Track: Agent-based Simulation
Program Tags: Complex Systems, Emergent Behavior, System Dynamics
Abstract: Graph dynamical systems (GDSs) are widely used to model and simulate realistic multi-agent social dynamics, including societal unrest. This involves representing the multi-agent system as a network and assigning to each vertex a function describing how it updates its state based on the states of its neighborhood. However, in many contexts, social dynamics are triggered by external processes, which can affect the state transitions of agents. The classical GDS formalism does not incorporate such processes. We introduce the STP-GDS framework, which allows a GDS to be triggered by spatiotemporal background processes. We present a rigorous definition of the framework, followed by a formal analysis estimating the size of the active neighborhood under two types of process distributions. The real-life applicability of the framework is further highlighted by a case study involving evacuation due to natural events, in which we analyze collective agent behaviors under heterogeneous environmental and spatial settings.
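Iterative proportional fitting, which the population-synthesis comparison in this session finds most effective, alternately rescales a seed contingency table until its row and column sums match known marginals. A minimal two-attribute sketch (the seed and marginals are made up for illustration, not the study's census data):

```python
def ipf(seed, row_targets, col_targets, iters=100):
    """Alternately rescale rows and columns of a seed table to hit the targets."""
    table = [row[:] for row in seed]
    for _ in range(iters):
        # Scale each row so it sums to its target marginal.
        for i, target in enumerate(row_targets):
            s = sum(table[i])
            if s > 0:
                table[i] = [v * target / s for v in table[i]]
        # Scale each column so it sums to its target marginal.
        for j, target in enumerate(col_targets):
            s = sum(row[j] for row in table)
            if s > 0:
                for i in range(len(table)):
                    table[i][j] *= target / s
    return table

seed = [[1.0, 1.0], [1.0, 1.0]]
fitted = ipf(seed, row_targets=[30.0, 70.0], col_targets=[40.0, 60.0])
print([[round(v, 2) for v in row] for row in fitted])
```

With a uniform seed, IPF converges to the independence table (here 12/18/28/42); a non-uniform seed would instead preserve the seed's interaction structure while matching the marginals.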
Agent-based Simulation: Agent-Based Transportation (Tags: AnyLogic, Complex Systems, Data Analytics, Netlogo)
Session Chair: Eric Reynolds (Motlow State Community College)
Self-Organization in Crowdsourced Food Delivery Systems
Berry Gerrits and Martijn Mes (University of Twente)
Program Track: Agent-based Simulation
Program Tags: Complex Systems, Netlogo
Abstract: This paper presents an open-source agent-based simulation model to study crowdsourced last-mile food delivery. Within this context, we focus on a system that allows couriers with varying degrees of autonomy and cooperativeness to make decisions about accepting orders and strategically relocating. We model couriers as agents in an agent-based simulation model implemented in NetLogo. Our approach provides the parameters necessary to control and balance system performance in terms of courier productivity and delivery efficiency. Our simulation results show that moderate levels of autonomy and cooperation lead to improved performance, with significant gains in workload distribution and responsiveness to changing demand patterns. Our findings highlight the potential of self-organizing, decentralized strategies to improve scalability, adaptability, and fairness in platform-based food delivery logistics.

Impact of Battery Electric Trucks on Intermodal Freight Transportation - An Agent-based Simulation Study
Eric Reynolds (Motlow State Community College), Nasim Nezamoddini (Oakland University), and Mustafa Can Camur and Xueping Li (University of Tennessee)
Program Track: Agent-based Simulation
Program Tags: AnyLogic, Data Analytics
Abstract: This paper applies an agent-based simulation model to examine the feasibility of battery electric trucks (BETs) in intermodal freight transportation, focusing on the Memphis hub network. Two infrastructure deployment stages, depot charging only and depot plus destination charging, are modeled and simulated using the AnyLogic platform to study truck utilization patterns. Real-world manufacturing sites are chosen, and the trucks are routed along roadways using a Geographic Information System (GIS) map. Battery charge levels and charging infrastructure are modeled under both scenarios. Four electric truck models from various manufacturers, including the Tesla Semi, Nikola Tre, Volvo VNR, and Freightliner eCascadia, are compared in terms of performance and utilization. Results show that battery electric trucks are a feasible solution for intermodal trucking operations and for transporting goods from manufacturers to destinations. The comparison also highlights the effects of changing shifts and adding opportunity charging at destinations on truck utilization under different battery efficiencies and capacities.
Track Coordinator - Analysis Methodology: Dohyun Ahn (The Chinese University of Hong Kong), Ben Feng (University of Waterloo), Eunhye Song (Georgia Institute of Technology)

Analysis Methodology: Risk and Quantile Estimation (Tags: Monte Carlo, Rare Events, Sampling)
Session Chair: Dongjoon Lee (Georgia Institute of Technology)
Computing Estimators of a Quantile and Conditional Value-at-Risk
Sha Cao, Truong Dang, James M. Calvin, and Marvin K. Nakayama (New Jersey Institute of Technology)
Program Track: Analysis Methodology
Program Tags: Monte Carlo, Rare Events, Sampling
Abstract: We examine various sorting and selection methods for computing the quantile and the conditional value-at-risk, two of the most commonly used risk measures in risk management. We study the situation where the simulation data have already been generated, and perform timing experiments on calculating risk measures from the existing datasets. Through numerical experiments, approximate analyses, and existing theoretical results, we find that selection generally outperforms sorting, but which selection strategy runs fastest depends on several factors.

Constructing Confidence Intervals for Value-at-Risk via Nested Simulation
Qianwen Zhu, Guangwu Liu, and Xianyu Kuang (City University of Hong Kong)
Program Track: Analysis Methodology
Abstract: Nested simulation is a powerful tool for estimating widely used risk measures, such as Value-at-Risk (VaR). While point estimation of VaR has been extensively studied in the literature, the topic of interval estimation remains comparatively underexplored. In this paper, we present a novel nested simulation procedure for constructing confidence intervals (CIs) for VaR with statistical guarantees. The proposed procedure begins by generating a set of outer scenarios, followed by a screening process that retains only a small subset of scenarios likely to result in significant portfolio losses. For each of these retained scenarios, inner samples are drawn, and the minimum and maximum means from these scenarios are used to construct the CI. Theoretical analysis confirms the asymptotic coverage probability of the resulting CI, ensuring its reliability. Numerical experiments validate the method, demonstrating its high effectiveness in practice.

A Fixed-Sample-Size Procedure For Estimating Steady-State Quantiles Based On Independent Replications
Athanasios Lolos (Navy Federal Credit Union), Christos Alexopoulos and David Goldsman (Georgia Institute of Technology), Kemal Dinçer Dingeç (Gebze Technical University), Anup C. Mokashi (Memorial Sloan Kettering Cancer Center), and James R. Wilson (North Carolina State University)
Program Track: Analysis Methodology
Abstract: We introduce the first fully automated fixed-sample-size procedure (FIRQUEST) for computing confidence intervals (CIs) for steady-state quantiles based on independent replications. The user provides a dataset from a number of independent replications of arbitrary size and specifies the required quantile and the nominal coverage probability of the anticipated CI. The proposed method is based on the simulation analysis methods of batching, standardized time series (STS), and sectioning. Preliminary experimentation with the waiting-time process in an M/M/1 queueing system showed that FIRQUEST performed well by appropriately handling initialization effects and delivering CIs with estimated coverage probability close to the nominal level.

Analysis Methodology: Importance Sampling and Algorithms (Tags: Monte Carlo, Variance Reduction)
Session Chair: Qianwen Zhu (City University of Hong Kong)
Importance Sampling for Latent Dirichlet Allocation
Best Contributed Theoretical Paper - Finalist
Paul Glasserman and Ayeong Lee (Columbia University)
Program Track: Analysis Methodology
Program Tags: Monte Carlo, Variance Reduction
Abstract: Latent Dirichlet Allocation (LDA) is a method for finding topics in text data. Evaluating an LDA model entails estimating the expected likelihood of held-out documents. This is commonly done through Monte Carlo simulation, which is prone to high relative variance. We propose an importance sampling estimator for this problem and characterize the theoretical asymptotic statistical efficiency it achieves on large documents. We illustrate the method on simulated data and on a dataset of news articles.

Exact Importance Sampling for a Linear Hawkes Process
Alexander Shkolnik and Liang Feng (University of California, Santa Barbara)
Program Track: Analysis Methodology
Abstract: We develop exact importance sampling schemes for (linear) Hawkes processes to efficiently compute their tail probabilities. We show that the classical approach of exponential twisting, while conceptually simple to apply, leads to simulation estimators that are difficult to implement. These difficulties manifest in either a simulation bias or unreasonable computational costs. We mimic exponential twisting with an exponential martingale approach to achieve identical variance-reduction guarantees but without the aforementioned challenges. Numerical tests compare the two approaches and present benchmarks against plain Monte Carlo.
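Exponential twisting, the classical baseline the Hawkes-process paper compares against, can be illustrated on the simplest possible case: estimating P(X > x) for X ~ Exp(1) by sampling from a tilted exponential and reweighting by the likelihood ratio. This sketch is illustrative only and unrelated to the Hawkes setting; the tilt parameter is the textbook near-optimal choice for this toy case.

```python
import math
import random

def tail_prob_is(x, theta, n=100_000, seed=0):
    """Importance-sampling estimate of P(X > x), X ~ Exp(1), via tilting.

    Twisting Exp(1) by theta < 1 gives the tilted law Exp(1 - theta); each
    sample above x is weighted by the likelihood ratio exp(-theta*y)/(1-theta).
    """
    rng = random.Random(seed)
    rate = 1.0 - theta
    total = 0.0
    for _ in range(n):
        y = rng.expovariate(rate)                 # draw from the tilted law
        if y > x:
            total += math.exp(-theta * y) / rate  # likelihood ratio weight
    return total / n

x = 10.0
est = tail_prob_is(x, theta=1.0 - 1.0 / x)  # near-optimal tilt for Exp(1)
print(est, "vs exact", math.exp(-x))
```

Plain Monte Carlo would need on the order of millions of samples to see even a handful of exceedances of x = 10, whereas the tilted sampler hits the rare event a constant fraction of the time.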
An Efficient Bipartite Graph Sampling Algorithm with Prescribed Degree Sequences
Tong Sun and Jianshu Hao (Harbin Institute of Technology), Zhiyang Zhang (Harbin University of Science and Technology), and Guangxin Jiang (Harbin Institute of Technology)
Program Track: Analysis Methodology
Abstract: The structure of financial networks plays a crucial role in managing financial risks, particularly in the assessment of systemic risk. However, the true structure of these networks is often difficult to observe directly. This makes it essential to develop methods for sampling possible network configurations based on partial information, such as node degree sequences. In this paper, we consider the problem of sampling bipartite graphs (e.g., bank-asset networks) under such partial information. We first derive exact bounds on the number of nodes that can be connected at each step, given a prescribed degree sequence. Building on these bounds, we then introduce a weighted-balanced random sampling algorithm for generating bipartite graphs that are consistent with the observed degrees, and illustrate how the algorithm works through an example. In addition, we demonstrate the effectiveness of the proposed algorithm through numerical experiments.

Analysis Methodology: Data-Driven Methods for Estimation and Control
Session Chair: Dohyun Ahn (The Chinese University of Hong Kong)
Adversarial Reinforcement Learning: A Duality-Based Approach to Solving Optimal Control Problems
Nan Chen, Mengzhou Liu, Xiaoyan Wang, and Nanyi Zhang (The Chinese University of Hong Kong)
Program Track: Analysis Methodology
Abstract: We propose an adversarial deep reinforcement learning (ADRL) algorithm for high-dimensional stochastic control problems. Inspired by information relaxation duality, ADRL reformulates the control problem as a min-max optimization between policies and adversarial penalties, enforcing non-anticipativity while preserving optimality. Numerical experiments demonstrate ADRL's superior performance in yielding tight dual gaps. Our results highlight ADRL's potential as a robust computational framework for high-dimensional stochastic control in simulation-based optimization contexts.

Data-driven Estimation of Tail Probabilities under Varying Distributional Assumptions
Dohyun Ahn (Chinese University of Hong Kong) and Sandeep Juneja, Tejas Pagare, and Shreyas Samudra (Ashoka University)
Program Track: Analysis Methodology
Abstract: We consider estimating $p_x = P(X > x)$ in a data-driven manner or through simulation, when $x$ is large and independent samples of $X$ are available. Naively, this involves generating $O(p_x^{-1})$ samples. Making distributional assumptions on $X$ reduces the sample complexity under commonly used distributional parameter estimators. It equals $O(\log p_x^{-1})$ for a Gaussian distribution with unknown mean and known variance, and $O(\log^2 p_x^{-1})$ when the variance is also unknown and the distribution is either exponential or Pareto. We also critically examine the more sophisticated assumption that the data belong to the domain of attraction of the Fréchet distribution, which allows estimation methods from extreme value theory (EVT). Our sobering conclusion, based on sample complexity analysis and numerical experiments, is that under these settings errors from estimation can be significant, so that for probabilities as low as $10^{-6}$, naive methods may be preferable to those based on EVT.

Control Variates Beyond Mean: Variance Reduction for Nonlinear Statistical Quantities
Henry Lam and Zitong Wang (Columbia University)
Program Track: Analysis Methodology
Abstract: The control variate (CV) is a powerful Monte Carlo variance reduction technique that injects mean information about an auxiliary related variable into the simulation output. Partly because of this way of leveraging information, CVs have mostly been studied in the context of mean estimation. In this paper, we study CVs for general nonlinear quantities such as conditional value-at-risk and stochastic optimization. While we can extend the tools from mean estimation, a challenge in nonlinear generalizations is the proper calibration of the CV coefficient, which deviates from standard linear regression estimators and requires influence functions or resampling. As a key contribution, we offer a general methodology that bypasses this challenge by harnessing a weighted representation of the CV and interchanging weights between the empirical distribution and the nonlinear functional. We provide theoretical results in the form of central limit theorems to illustrate our performance gains and demonstrate them numerically with several experiments.

Analysis Methodology: Quasi-Monte Carlo and Functional Methods (Tags: Monte Carlo, Output Analysis, R, Variance Reduction)
Session Chair: Ayeong Lee (Columbia University)
Central Limit Theorem for a Randomized Quasi-Monte Carlo Estimator of a Smooth Function of Means
Marvin K. Nakayama (New Jersey Institute of Technology), Bruno Tuffin (Inria), and Pierre L'Ecuyer (Université de Montréal)
Program Track: Analysis Methodology
Program Tags: Monte Carlo, Output Analysis, Variance Reduction
Abstract: Consider estimating a known smooth function (such as a ratio) of unknown means. Our paper accomplishes this by first estimating each mean via randomized quasi-Monte Carlo and then evaluating the function at the estimated means. We prove that the resulting plug-in estimator obeys a central limit theorem by first establishing a joint central limit theorem for a triangular array of estimators of the vector of means and then employing the delta method.
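The classical mean-estimation control variate that the Lam and Wang paper in this session generalizes works by subtracting a scaled deviation of an auxiliary variable from its known mean, with the scale chosen by regression. A minimal sketch, estimating E[exp(U)] with U ~ Uniform(0,1) as the control (the example is invented for illustration, not from the paper):

```python
import math
import random

def cv_mean(n=50_000, seed=1):
    """Control-variate estimate of E[exp(U)], with U (known mean 1/2) as control."""
    rng = random.Random(seed)
    us = [rng.random() for _ in range(n)]
    ys = [math.exp(u) for u in us]
    y_bar = sum(ys) / n
    u_bar = sum(us) / n
    # Optimal coefficient beta = Cov(Y, U) / Var(U), estimated from the samples.
    cov = sum((y - y_bar) * (u - u_bar) for y, u in zip(ys, us)) / n
    var = sum((u - u_bar) ** 2 for u in us) / n
    beta = cov / var
    return y_bar - beta * (u_bar - 0.5)  # CV-adjusted estimate

est = cv_mean()
print(est, "vs exact", math.e - 1)
```

Because exp(U) and U are highly correlated, the adjustment removes most of the variance of the plain sample mean; the nonlinear settings the paper targets (e.g., CVaR) are precisely where this simple beta calibration no longer applies directly.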
Using Adaptive Basis Search Method In Quasi-Regression To Interpret Black-Box Models
Ambrose Emmett-Iwaniw and Christiane Lemieux (University of Waterloo)
Program Track: Analysis Methodology
Program Tags: Monte Carlo, R, Variance Reduction
Abstract: Quasi-regression (QR) is an inference method that, for interpretation purposes, approximates a function of interest (e.g., a black-box model) by a linear combination of orthonormal basis functions of $L^2[0,1]^{d}$. The coefficients are integrals that do not have an analytical solution and therefore must be estimated using Monte Carlo or randomized quasi-Monte Carlo (RQMC). The QR method can be time-consuming if the number of basis functions is large. If the function of interest is sparse, many of these basis functions are irrelevant and could be removed, but they need to be correctly identified first. We address this challenge by proposing new adaptive basis search methods, based on RQMC, that adaptively select important basis functions. These methods are shown to be much faster than previously proposed QR methods and are overall more efficient.

Analysis Methodology / Scientific AI and Applications: Analysis Methods and Algorithms (Tags: Data Analytics, DOE, Metamodeling, Output Analysis, R)
Session Chair: Rafael Mayo-García (CIEMAT)
Exploiting Functional Data for Combat Simulation Sensitivity Analysis
Lisa Joanne Blumson, Charlie Peter, and Andrew Gill (Defence Science and Technology Group)
Program Track: Analysis Methodology
Program Tags: Data Analytics, DOE, Metamodeling, Output Analysis, R
Abstract: Computationally expensive combat simulations are often used to inform military decision-making, and sensitivity analyses enable quantification of the effect of military capabilities or tactics on combat mission effectiveness. The sensitivity analysis is performed using a meta-model that approximates the simulation's input-output relationship, and the output data to which most combat meta-models are fitted correspond to end-of-run mission effectiveness measures. However, during execution a simulation records a large array of temporal data. This paper examines whether functional combat meta-models fitted to this temporal data, and the subsequent sensitivity analysis, could provide a richer characterization of the effect of military capabilities or tactics. An approach from Functional Data Analysis is used to illustrate the potential benefits on a case study involving a closed-loop, stochastic land combat simulation.

Preconditioning a Restarted GMRES Solver Using the Randomized SVD
José A. Moríñigo, Andrés Bustos, and Rafael Mayo-García (CIEMAT)
Program Track: Scientific AI and Applications
Abstract: The performance of a CPU-only implementation of the restarted GMRES algorithm with direct randomized-SVD-based preconditioning is analyzed. The method has been tested on a set of sparse and dense matrices exhibiting varying spectral properties and compared to ILU(0)-based preconditioning; this comparison aims to assess the advantages and drawbacks of both approaches. The trade-off between iteration-to-solution and time-to-solution metrics is discussed, demonstrating that the proposed method achieves an improved convergence rate in terms of iterations. Additionally, the method's competitiveness with respect to both metrics is discussed within the context of several relevant scenarios, particularly those where GMRES-based simulation techniques are applicable.
pdf Analysis MethodologyAnalysis for Simulation Optimization Session Chair: Tong Sun (Harbin Institute of Technology) Enhanced Derivative-Free Optimization Using Adaptive Correlation-Induced Finite Difference Estimators Guo Liang (Renmin University of China), Guangwu Liu (City University of Hong Kong), and Kun Zhang (Renmin University of China) Program Track: Analysis Methodology Abstract AbstractGradient-based methods are well-suited for derivative-free optimization (DFO), where finite-difference (FD) estimates are commonly used as gradient surrogates. Traditional stochastic approximation methods, such as Kiefer-Wolfowitz (KW) and simultaneous perturbation stochastic approximation (SPSA), typically utilize only two samples per iteration, resulting in imprecise gradient estimates and necessitating diminishing step sizes for convergence. In this paper, we combine a batch-based FD estimate and an adaptive sampling strategy, developing an algorithm designed to enhance DFO in terms of both gradient estimation efficiency and sample efficiency. Furthermore, we establish the consistency of our proposed algorithm and demonstrate that, despite using a batch of samples per iteration, it achieves the same sample complexity as the KW and SPSA methods. Additionally, we propose a novel stochastic line search technique to adaptively tune the step size in practice. Finally, comprehensive numerical experiments confirm the superior empirical performance of the proposed algorithm. pdfRegular Tree Search for Simulation Optimization Du-Yi Wang (City University of Hong Kong, Renmin University of China); Guo Liang (Renmin University of China); Guangwu Liu (City University of Hong Kong); and Kun Zhang (Renmin University of China) Program Track: Analysis Methodology Abstract AbstractTackling simulation optimization problems with non-convex objective functions remains a fundamental challenge in operations research. 
In this paper, we propose a class of random search algorithms, called Regular Tree Search, which integrates adaptive sampling with recursive partitioning of the search space. The algorithm concentrates simulations on increasingly promising regions by iteratively refining a tree structure. A tree search strategy guides sampling decisions, while partitioning is triggered when the number of samples in a leaf node exceeds a threshold that depends on its depth. Furthermore, a specific tree search strategy, Upper Confidence Bounds applied to Trees (UCT), is employed in the Regular Tree Search. We prove global convergence under sub-Gaussian noise, based on assumptions involving the optimality gap, without requiring continuity of the objective function. Numerical experiments confirm that the algorithm reliably identifies the global optimum and provides accurate estimates of its objective value. pdfNested Superlevel Set Estimation for Simulation Optimization under Parameter Uncertainty Dongjoon Lee and Eunhye Song (Georgia Institute of Technology) Program Track: Analysis Methodology Abstract AbstractThis paper addresses the challenge of reliably selecting high-performing solutions in simulation optimization when model parameters are uncertain. We infer the set of solutions whose probabilities of performing better than a user-defined threshold are above a confidence level given the uncertainty about the parameters. We show that this problem can be formulated as a nested superlevel set estimation problem and propose a sequential sampling framework that models the simulation output mean as a Gaussian process (GP) defined on the solution and parameter spaces. Based on the GP model, we introduce a set estimator and an acquisition function that evaluates the expected number of solutions whose set classifications change should each solution-parameter pair be sampled. We also provide approximation schemes to make the acquisition function computation more efficient. 
Based on these, we propose a sequential sampling algorithm that effectively reduces the set estimation error and empirically demonstrate its performance. pdf Data Driven, Python, Ranking and Selection, Analysis Methodology, Uncertainty Quantification and Robust SimulationStatistical Estimation and Performance Analysis Session Chair: Sara Shashaani (North Carolina State University) Distributionally Robust Logistic Regression with Missing Data Weicong Chen and Hoda Bidkhori (George Mason University) Program Track: Uncertainty Quantification and Robust Simulation Program Tags: Data Driven, Python Abstract AbstractMissing data presents a persistent challenge in machine learning. Conventional approaches often rely on data imputation followed by standard learning procedures, typically overlooking the uncertainty introduced by the imputation process. This paper introduces Imputation-based Distributionally Robust Logistic Regression (I-DRLR)—a novel framework that integrates data imputation with class-conditional Distributionally Robust Optimization (DRO) under the Wasserstein distance. I-DRLR explicitly models distributional ambiguity in the imputed data and seeks to minimize the worst-case logistic loss over the resulting uncertainty set. We derive a convex reformulation to enable tractable optimization and evaluate the method on the Breast Cancer and Heart Disease datasets from the UCI Repository. Experimental results demonstrate consistent out-of-sample improvements in both prediction accuracy and ROC-AUC, outperforming traditional methods that treat imputed data as fully reliable. 
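The Upper Confidence Bounds applied to Trees (UCT) strategy named in the Regular Tree Search abstract above chooses which region of the partitioned search space to sample next by trading off a region's observed mean reward against an exploration bonus. A minimal sketch of the standard UCB1-style selection over hypothetical node statistics (not the authors' implementation):

```python
import math

def uct_select(children, exploration=math.sqrt(2.0)):
    """Pick the child maximizing mean reward plus an exploration bonus.
    `children` maps a node name to (total_reward, visit_count);
    unvisited children are expanded first, as in standard UCT."""
    total_visits = sum(n for _, n in children.values())
    best_name, best_score = None, -math.inf
    for name, (reward, visits) in children.items():
        if visits == 0:
            return name  # always try unvisited children first
        score = reward / visits + exploration * math.sqrt(
            math.log(total_visits) / visits
        )
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical statistics for three sub-regions of the search space:
stats = {"left": (9.0, 10), "middle": (4.0, 5), "right": (0.0, 0)}
print(uct_select(stats))  # the unvisited "right" region is selected
```

With all children visited, the bonus term shrinks as a region accumulates samples, so simulation effort concentrates on promising regions while never permanently abandoning the others.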
pdfWorst-case Approximations for Robust Analysis in Multiserver Queues and Queuing Networks Hyung-Khee Eun and Sara Shashaani (North Carolina State University) and Russell Barton (Pennsylvania State University) Program Track: Uncertainty Quantification and Robust Simulation Abstract AbstractThis study explores strategies for robust optimization of queueing performance in the presence of input model uncertainty. Ambiguity sets for Distributionally Robust Optimization (DRO) based on Wasserstein distance are preferred for general DRO settings where the computation of performance given the distribution form is straightforward. For complex queueing systems, distributions with large Wasserstein distance (from the nominal distributions) do not necessarily provide extreme objective values. Thus, the calculation of performance extremes must be done via an inner level of maximization, making DRO a compute-intensive activity. We explore approximations for queue waiting time in a number of settings and show how they can provide low-cost guidance on extreme objective values, allowing for more rapid DRO. Approximations are provided for single- and multi-server queues and queueing networks, each illustrated with an example. We also show in settings with a small number of solution alternatives that these approximations lead to robust solutions. pdfRevisiting an Open Question in Ranking and Selection Under Unknown Variances Best Contributed Theoretical Paper - Finalist Jianzhong Du (University of Science and Technology of China), Siyang Gao (City University of Hong Kong), and Ilya O. Ryzhov (University of Maryland) Program Track: Analysis Methodology Program Tag: Ranking and Selection Abstract AbstractExpected improvement (EI) is a common ranking and selection (R&S) method for selecting the optimal system design from a finite set of alternatives. 
Ryzhov (2016) observed that, under normal sampling distributions with known variances, the limiting budget allocation achieved by EI was closely related to the theoretical optimum. However, when the variances are unknown, the behavior of EI was quite different, giving rise to the question of whether the optimal allocation in this setting was totally distinct from the known-variance case. This research solves that problem with a new analysis that can distinguish between known and unknown variance, unlike previously existing theoretical frameworks. We derive a new optimal budget allocation for this setting, and confirm that the limiting behavior of EI has a similar relationship to this allocation as in the known-variance case. pdf
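Under normal sampling with known variances, the expected improvement criterion discussed in the preceding abstract has a simple closed form; a minimal numerical sketch for a minimization problem with hypothetical posterior means and standard deviations (illustrative only, and not the paper's unknown-variance analysis):

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, best):
    """Closed-form EI of a normal belief N(mu, sigma^2) over the current
    best (smallest) mean `best`, for a minimization problem."""
    if sigma <= 0.0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    return (best - mu) * norm_cdf(z) + sigma * norm_pdf(z)

# Hypothetical alternatives: (posterior mean, posterior std. deviation).
alternatives = {"A": (1.0, 0.5), "B": (1.2, 2.0), "C": (0.9, 0.1)}
best_mean = min(mu for mu, _ in alternatives.values())
for name, (mu, sigma) in alternatives.items():
    print(name, round(expected_improvement(mu, sigma, best_mean), 4))
```

Here the high-variance alternative B receives the largest EI even though its mean is worst, the exploration behavior whose limiting sampling allocation the paper studies.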
Aviation Modeling and Analysis Track Coordinator - Aviation Modeling and Analysis: Miguel Mujica Mota (Amsterdam University of Applied Sciences), Michael Schultz (Bundeswehr University Munich), John Shortle (George Mason University) AnyLogic, Distributed, Java, Simio, System Dynamics, Aviation Modeling and Analysis, Logistics, Supply Chain Management, TransportationAdvances in Aviation Modeling and Simulation Session Chair: Hauke Stolz (Technical University of Hamburg) A Synergistic Approach to Workforce Optimization in Airport Screening using Machine Learning and Discrete-Event Simulation Lauren A. Cravy and Eduardo Perez (Texas State University) Program Track: Logistics, Supply Chain Management, Transportation Program Tag: Simio Abstract AbstractThis study explores the integration of machine learning (ML) clustering techniques into a simulation-optimization framework aimed at enhancing the efficiency of airport security checkpoints. Simulation-optimization is particularly suited for addressing problems characterized by evolving data uncertainties, necessitating critical system decisions before the complete data stream is observed. This scenario is prevalent in airport security, where passenger arrival times are unpredictable, and resource allocation must be planned in advance. Despite its suitability, simulation-optimization is computationally intensive, limiting its practicality for real-time decision-making. This research hypothesizes that incorporating ML clustering techniques into the simulation-optimization framework can significantly reduce computational time. A comprehensive computational study is conducted to evaluate the performance of various ML clustering techniques, identifying the OPTICS method as the best found approach. By incorporating ML clustering methods, specifically the OPTICS technique, the framework significantly reduces computational time while maintaining high-quality solutions for resource allocation. 
pdfDevelopment of a Library of Modular Components to Accelerate Material Flow Simulation in the Aviation Industry Hauke Stolz, Philipp Braun, and Hendrik Rose (Technical University of Hamburg) and Helge Fromm and Sascha Stebner (Airbus Group) Program Track: Logistics, Supply Chain Management, Transportation Program Tags: AnyLogic, Java Abstract AbstractAircraft manufacturing presents significant challenges for logistics departments due to the complexity of processes and technology, as well as the high variety of parts that must be handled. To support the development and optimization of these complex logistics processes in the aviation industry, simulation is commonly employed. However, existing simulation models are typically tailored to specific use cases. Reusing or adapting these models for other aircraft-specific applications often requires substantial implementation and
validation efforts. As a result, there is a need for flexible and easily adaptable simulation models. This work aims to address this challenge by developing a modular library for logistics processes in aircraft manufacturing. The outcome of this work highlights the simplifications introduced by the developed library and its application in a real aviation warehouse. pdfUsing the Tool Command Language for a Flight Simulation Flight Dynamics Model Frank Morlang (Private Person) and Steffen Strassburger (Ilmenau University of Technology) Program Track: Aviation Modeling and Analysis Program Tags: Distributed, System Dynamics Abstract AbstractThis paper introduces a methodology for simulating flight dynamics utilizing the Tool Command Language (Tcl). Tcl, created by John Ousterhout, was conceived as an embeddable scripting language for an experimental Computer Aided Design (CAD) system. Tcl, a mature and maturing language recognized for its simplicity, versatility, and extensibility, is a compelling contender for the integration of flight dynamics functionalities. The work presents an extension method utilizing Tcl's adaptability for a novel type of flight simulation programming. Initial test findings demonstrate performance appropriate for the creation of human-in-the-loop real-time flight simulations. The possibility for efficient and precise modeling of future complicated distributed simulation elements is discussed, and recommendations regarding subsequent development priorities are drawn. pdf
Track Coordinator - Commercial Case Studies: Amy Greer (MOSIMTEC, LLC), Arash Mahdavi (Simuland.ai), Saurabh Parakh (MOSIMTEC, LLC) Commercial Case StudiesIndustry Learning Session - Planning for Simulation Project Success Session Chair: Jennifer Cowden (BigBear.ai) Planning for Simulation Project Success Jennifer Cowden and David Tucker (BigBear.ai) Abstract AbstractDiscrete Event Simulation (DES) can be a powerful tool to help companies make better decisions. However, just one incorrect application of simulation can leave a bad taste with project sponsors and can derail future simulation endeavors. What are appropriate uses for simulation? What type of model do you need and how will that drive the data that should be collected? This presentation will help you get your project started on the right foot by giving you tips on what things to consider before kicking it off. pdfOptimizing CAD-Simulation Integration: An Automated Framework for Model Generation Rebecca Pires dos Santos, Gilles Guedia, and Abhineet Mittal (Amazon) Program Track: Commercial Case Studies Abstract AbstractThe integration of Computer-Aided Design (CAD) models into discrete event simulation software is a critical requirement for many simulation projects, particularly those involving the movement of people or vehicles where spatial accuracy directly impacts study outcomes. While importing CAD files and configuring simulation elements is essential for system accuracy, this process is typically time-consuming, prone to errors, and involves substantial repetitive tasks. Although previous attempts have been made to automate this workflow, the wide variety of CAD formats and lack of standardization pose significant challenges. This paper presents novel approaches for automating CAD import processes, specifically focusing on 2D drawings using the ezdxf Python library (ezdxf) and 3D models using Revit Python Shell (revitpythonshell). 
Our methods demonstrate potential time savings and error reduction in simulation model development while maintaining spatial accuracy. pdfFresh Food Strategy Simulation for a Large Convenience Store Chain Nelson Alfaro Rivas (MOSIMTEC, LLC) Program Track: Commercial Case Studies Abstract AbstractOne of North America’s largest convenience store chains launched a strategic initiative to expand fresh food offerings across all locations, requiring significant changes to store layouts, equipment, staffing, and operations. To support data-driven decision-making for these remodels, the client partnered with MOSIMTEC to develop a virtual twin of its most common store layout using AnyLogic simulation software. The simulation model, integrated with Excel and PowerBI, enabled stakeholders to evaluate store configurations, labor needs, equipment impacts, and throughput performance—ultimately guiding smarter investments in their Fresh Food program. pdf Commercial Case StudiesSimulation for Planning & Scheduling - Part 1 Session Chair: Jason Ceresoli (Simio LLC) Modeling For Dynamic Work Movement Bryson White and Jonathan Fulton (The Boeing Company) Program Track: Commercial Case Studies Abstract AbstractDynamic work movement is a transformative approach that allows tasks not fully completed at their designated position to transition seamlessly to subsequent positions within the workflow. This method ensures that delays are minimized and the entire process remains fluid and dynamic. It is particularly useful for tasks not entirely bound by their positional location, providing remarkable flexibility and adaptability in the workflow process. Facilitating the movement of jobs that are behind schedule to follow-on positions, dynamic work movement ensures that the production process remains on schedule, thereby enhancing overall efficiency and productivity. The implementation of dynamic work movement significantly improves the flexibility and adaptability of the production system. 
The streamlined process ensures continuous progress and minimizes delays, ultimately leading to a more efficient and effective production system. This approach also enhances the ability to handle variability and disruptions, ensuring that the production system can adapt to changing conditions and requirements. pdfDynamic Scheduling Model for a Multi-Site Snack Food Manufacturer: A Simio Digital Twin Implementation William Groves and Matthew Nixon (Argon & Co) and Adam Sneath (Simio LLC) Program Track: Commercial Case Studies Abstract AbstractThis case study examines the implementation of a Simio-based digital twin scheduling model for a major Australian snack food manufacturer. The company faced significant scheduling challenges due to complex production constraints, including shared assets, intricate product routing, and the transition between multiple manufacturing sites. The previous Excel-based scheduling system could not adequately handle these complexities, resulting in suboptimal resource utilization and difficulty in scenario planning. The Simio model incorporated flow-based modeling techniques to represent fryer outputs and bagger draws, enabling dynamic scheduling that respected all production constraints while optimizing resource utilization. The implementation resulted in improved planning efficiency, better alignment between departments, enhanced capacity understanding, and support for the successful commissioning of a new manufacturing facility. This case demonstrates how simulation-based scheduling can address complex manufacturing environments with multiple interdependent constraints. 
pdfSimulation-based Workload Forecasting for Shipyard Block Assembly Operations Bongeun Goo (HD Korea Shipbuilding & Offshore Engineering) Program Track: Commercial Case Studies Abstract AbstractThe shipbuilding industry faces challenges in simulation-based production forecasting due to custom design requirements, design-to-production variability, labor-intensive processes, and complex scheduling. Increasing labor shortages and cost pressures highlight the need for a data-driven operational framework. A discrete-event simulation model tailored for ship block assembly planning is presented, defining standardized work units and deployment criteria to forecast labor and equipment workload and schedule delays. The simulator significantly shortens planning and load analysis time, improves delay forecasting accuracy, and visualizes load distribution and bottleneck patterns through a user-friendly interface. These capabilities enhance management efficiency and decision-making confidence in practical shipyard operations. pdf Commercial Case StudiesIndustry Panel - Challenging Projects: Learn from Our Mistakes Session Chair: Nathan Ivey (Rockwell Automation Inc.) Industry Panel: Challenging Projects: Learn from Our Mistakes Nathan Ivey (Rockwell Automation Inc.); Amy Greer (MOSIMTEC, LLC); Benjamin Dubiel (D-squared Consulting Services); and Jaco-Ben Vosloo (The AnyLogic Modeler) Program Track: Commercial Case Studies Abstract AbstractThis lively and candid panel, Challenging Projects: Learn from Our Mistakes, invites seasoned industry practitioners to share their most painful modeling missteps and project failures, along with the hard-earned lessons that came from them. Rather than focusing on polished success stories, panelists will reflect on what went wrong, why it happened, and what they would do differently with the benefit of hindsight. 
The goal is to broaden the conversation beyond individual case studies and extract practical wisdom from the real-world experiences of simulation professionals. New modelers will gain valuable insight into common pitfalls, unspoken challenges, and the importance of resilience and reflection in building a successful career. pdf Commercial Case StudiesSimulation in Manufacturing - Part 1 Session Chair: Jason Ceresoli (Simio LLC) Simulation Study with New Painting Technologies: Ink-jet printing & UV Curing and Painted Film Vacuum Forming Changha Lee (Sungkyunkwan University); Seog-Chan Oh and Hua-Tzu Fan (General Motors R&D); and Junwoo Lim, Eun-Young Choi, and Sang Do Noh (Sungkyunkwan University) Program Track: Commercial Case Studies Abstract AbstractTraditional automotive painting, such as spray painting with oven drying, faces scalability challenges, high energy use, and significant environmental impacts. Emerging alternatives include ink-jet printing with UV curing and painted-film vacuum forming. To fully capitalize on these advanced technologies, feasibility assessments through advanced simulation and optimization techniques are essential at the design stage of manufacturing systems. This study focuses on two processes: (1) the ink-jet printing & UV light curing-based painting process and (2) the painted film vacuum forming painting process. We aim to identify potential operational issues and propose corresponding solutions to ultimately achieve desired throughputs. It is expected that the considerations presented regarding production operations for adopting advanced painting technologies will serve as a useful reference for practitioners in the automotive painting area. 
pdfHybrid Simulation and Learning Framework for WIP Prediction in Semiconductor Fabs Taki Eddine Korabi, Gerard Goossen, Abhinav Kaushik, Jasper van Heugten, Jeroen Bédorf, and Murali Krishna (minds.ai) Program Track: Commercial Case Studies Abstract AbstractThis paper presents a hybrid framework that combines discrete-event simulation (DES) with neural networks to forecast Work-In-Progress (WIP) in semiconductor fabs. The model integrates three learned components: a dispatching model, an inter-start time predictor, and a processing time estimator. These models drive a lightweight simulation engine that accurately predicts WIP across various aggregation levels. pdfScaling Metal Fabrication Production: A Simulation-Based Approach to Facility Design and Optimization Adam Sharman (LMAC Group) and Chiara Bondi and Adam Sneath (Simio LLC) Program Track: Commercial Case Studies Abstract AbstractThis case study examines how LMAC Group utilized Simio simulation software to support a New Zealand metal fabrication manufacturer in scaling production from 600 to 26,000 units while maintaining onshore manufacturing and reducing unit costs. The simulation model enabled analysis of current production capacity, identification of constraints, and evaluation of mitigation strategies including shift pattern modifications and automation options. The approach demonstrated how simulation could inform capital investment decisions by providing data-driven insights before physical implementation, resulting in optimized worker allocation, improved machine utilization, and a comprehensive future state factory design. 
pdf Commercial Case StudiesSupply Chain Simulation in Amazon I Session Chair: Yunan Liu (North Carolina State University) INQUIRE: INput-aware Quantification of Uncertainty for Interpretation, Risk, and Experimentation Yujing Lin, Jingtao Zhang, Mitchell Perry, Xiaoyu Lu, Yunan Liu, and Hoiyi Ng (Amazon) Program Track: Commercial Case Studies Abstract AbstractStochastic discrete event simulation is a vital tool across industries. However, the high dimensionality and complexity of real-world systems make it challenging to develop simulations that accurately model and predict business metrics for users when faced with inaccurate input and model fidelity limitations. Addressing this challenge is critical for improving the effectiveness of industrial simulations. In this work, we focus on simulation output uncertainty, a crucial summary statistic for assessing business risks. We introduce a novel framework called INput-aware Quantification of Uncertainty for Interpretation, Risk, and Experimentation (INQUIRE). At the heart of INQUIRE, we develop a residual-based uncertainty prediction model driven by key input parameters. Then we incorporate a skewness-detection procedure for quantile estimation that provides risk assessment. To analyze how input parameters evolution influences simulation output uncertainty, we introduce a Shapley-value-based interpretation method. Additionally, our framework enables more efficient simulation-driven experimentation, enhancing strategic decision-making by providing deeper insights. pdfWho’s to Blame? Unraveling Causal Drivers in Supply Chain Simulations with a Shapley Value Based Attribution Mechanism Using Gaussian Process Emulator Hoiyi Ng, Yujing Lin, Xiaoyu Lu, and Yunan Liu (Amazon) Program Track: Commercial Case Studies Abstract AbstractEnterprise-level simulation platforms model complex systems with thousands of interacting components, enabling organizations to test hypotheses and optimize operations in a virtual environment. 
Among these, supply chain simulations play a crucial role in planning and optimizing complex logistics operations. As these simulations grow more sophisticated, robust methods are needed to explain their outputs and identify key drivers of change. In this work, we introduce a novel causal attribution framework based on the Shapley value, a game-theoretic approach for quantifying the contribution of individual input features to simulation outputs. By integrating Shapley values with explainable Gaussian process models, we effectively decompose simulation outputs into individual input effects, improving interpretability and computational efficiency. We demonstrate our framework using both synthetic and real-world supply chain data, illustrating how our method rapidly identifies the root causes of anomalies in simulation outputs. pdfSimulating Customer Wait-time Metrics in Nonstationary Queues: a Queue-based Conditional Estimator for Variance Reduction Yunan Liu and Ling Zhang (Amazon) Program Track: Commercial Case Studies Abstract AbstractWe propose a new method to efficiently simulate customer-averaged service metrics in nonstationary queueing systems with time-varying arrivals and staffing. Key delay-based metrics include the fraction of customers waiting no more than x minutes, average waiting time, and abandonment rate over a finite horizon. Traditional discrete-event simulation (DES) tracks individual customers, leading to complex implementation and high variance due to numerous random events. To address this, we introduce a queue-based estimator that conditions on the time-dependent queue-length process, significantly reducing variance and simplifying the implementation of the simulation. We further enhance this estimator using many-server heavy-traffic approximations to capture queue dynamics more accurately. Numerical results show our method is much more computationally efficient than standard DES, making it highly scalable for large systems. 
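The Shapley value underlying both attribution frameworks in the abstracts above assigns each input its average marginal contribution over all orderings of the inputs; for a handful of inputs it can be computed exactly by enumeration. A minimal sketch with a hypothetical value function for a supply chain metric (the papers instead pair Shapley values with Gaussian process emulators to keep this tractable at scale):

```python
import itertools
import math

def shapley_values(players, value):
    """Exact Shapley values: each player's average marginal contribution
    over all orderings of the player set."""
    players = list(players)
    phi = {p: 0.0 for p in players}
    for order in itertools.permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    n_orderings = math.factorial(len(players))
    return {p: total / n_orderings for p, total in phi.items()}

# Hypothetical simulation-output change attributed to three inputs:
# "demand" and "leadtime" interact; "capacity" contributes additively.
def v(coalition):
    total = 0.0
    if "demand" in coalition:
        total += 3.0
    if "capacity" in coalition:
        total += 1.0
    if {"demand", "leadtime"} <= coalition:
        total += 2.0  # interaction term, split by the Shapley average
    return total

print(shapley_values(["demand", "leadtime", "capacity"], v))
```

The 2.0 interaction term is split evenly between demand and leadtime, giving attributions of 4.0, 1.0, and 1.0 that sum to the total output change of 6.0 (the efficiency property).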
pdf Commercial Case StudiesIndustry Learning Session - Verification and Validation in the Real World Session Chair: Renee Thiesing (Promita Consulting) Verification and Validation in the Real World Renee Thiesing (Promita LLC) Program Track: Commercial Case Studies Abstract AbstractVerification and Validation (V&V) are foundational to building trust in simulation models, yet their application in practice frequently involves ambiguity and difficulty. This case study explores how simulation practitioners navigate the complexities of V&V across different industries. Drawing on academic frameworks, practitioner interviews and industry case studies, we examine the disconnect between textbook theory and real-world implementation. pdfGeoPops: an Open-source Package for Generating Geographically Realistic Synthetic Populations Alisa Hamilton (Johns Hopkins University); Alexander Tulchinsky (One Health Trust); Gary Lin (Johns Hopkins Applied Physics Laboratory); Cliff Kerr (Institute for Disease Modeling); Eili Klein (One Health Trust, Johns Hopkins University); and Lauren Gardner (Johns Hopkins University) Program Track: Commercial Case Studies Abstract AbstractSynthetic populations with spatially connected individuals are useful for modeling infectious diseases, particularly when assessing the impact of interventions on geographic and demographic subgroups. However, open-source tools that incorporate geospatial data are limited. We present a method to generate synthetic contact networks for any US region using publicly available data. GeoPops is an open-source package that 1) synthesizes a set of agents approximating US Census distributions, 2) assigns agents to homes, schools, and workplaces, and 3) connects agents within these locations. We compare GeoPops to two other synthesis tools that generate comparable network structures by creating Maryland populations with each package and assessing accuracy against US Census data. 
We simulate COVID-19 transmission on each population and compare simulations to observed data for the first wave of the pandemic. Our study highlights the utility of spatially connected synthetic populations and builds the capacity of modelers to better inform epidemic decision making. pdfTraining Enterprise Digital Twin: Optimizing Defense Contractor Training Operations through Simulation Justin Scarborough (Lockheed Martin Corporation) and Ryan Luttrell and Adam Sneath (Simio LLC) Program Track: Commercial Case Studies Abstract AbstractThis paper examines Lockheed Martin’s implementation of a Training Enterprise Digital Twin system that revolutionizes military training operations management. Using Simio, Lockheed Martin created a comprehensive, data-driven digital model of a training enterprise to optimize resource allocation, forecast program performance, and support strategic decision-making. The simulation framework incorporates student progression through training pipelines alongside asset availability and maintenance requirements. Results demonstrate significant improvements in training efficiency via optimized training times, improved resource utilization, and substantial cost avoidance through optimized asset procurement. This case study illustrates how simulation technology enables defense contractors to meet performance-based contract requirements while maximizing return on investment in training operations. 
pdf Commercial Case StudiesSimulation for a Green New World Session Chair: Gerda Trollip (MOSIMTEC LLC) A "Data Digital Twin" Development to Enhance Thermal Estimation in Electric Traction Motors David Blum, Fauzan Dhongre, Alekh Kurup, Jaehoon Jeong, Anandakumar Subbiah, Vignesh Krishnakumar, Anuradha Kodali, Ram Surada, Vivek Sundaram, and Dhyana Ramamurthy (Lucid Motors) Program Track: Commercial Case Studies Abstract AbstractLucid Motors has begun a proof of concept which aims to formulate a motor thermal estimator using a modern time series transformer and integrate it into our current physics-based model, creating a “data digital twin” of the motor thermal system. Outputs include component-level heat and power loss estimates over time, which are important for tuning motor performance. Our goal with the data digital twin is to improve latency and increase the time step frequency for thermal estimation, which are currently limited by computational load and in turn limit the optimization and accuracy of control algorithms designed to react to the temperature gradients within the drive unit. pdfEfficient Manufacturing of Electrolyzer Cells for Green Hydrogen Production Stefan Galka (OTH Regensburg, IZPL) and Florian Schmid (OTH Regensburg) Program Track: Commercial Case Studies Abstract AbstractMany countries and companies are pursuing the goal of climate neutrality, which is increasing the importance of green hydrogen. However, due to high electricity and capital costs—particularly for electrolyzers—the production of green hydrogen remains expensive. As part of the StaR research project, a novel stack for alkaline electrolyzers was developed, along with a scalable production concept. To reduce investment risk, a semi-automated pilot production facility was established at the Dortmund site. The production processes and workforce allocation were modeled using a discrete-event simulation and analyzed with regard to productivity and resource assignment. 
A generic control logic enables flexible evaluation of various scenarios for work plan optimization. Real-world process data are continuously collected and integrated into the simulation model for ongoing updates.

Fueling the Future: Digital Twin Implementation at Westinghouse
Martin Franklin and Gerda Trollip (MOSIMTEC LLC)
Program Track: Commercial Case Studies
Abstract
Westinghouse Electric Company partnered with MOSIMTEC to implement Simio-based digital twins, replacing fragmented, Excel-driven planning across five nuclear fuel production sites. The solution integrates data from SAP, IMS, and Excel via ETL processes to enable dynamic, capacity-constrained scheduling and real-time scenario analysis. Developed through a structured methodology—including functional specification, phased modeling, validation, and integration—the model supports multi-site planning, throughput analysis, material flow visualization, and resource utilization reporting. Planning cycles were reduced from weeks to hours, improving responsiveness to customer requests, outages, and shifting priorities. The project revealed data inconsistencies across systems, prompting quality improvements. Westinghouse planners now independently run and modify plans without external support, enabling faster decisions and improved cross-site coordination. The digital twins have established a scalable, data-driven framework that supports ongoing optimization and future expansion into other business units. This case highlights how simulation can enable operational agility in complex, high-regulation manufacturing environments.
Commercial Case Studies: Supply Chain Simulation in Amazon II
Session Chair: Xiaoyu Lu (Amazon)

Simulation-Based Online Retailer Supply Chain Inbound Node Arrival Capacity Control
Michael Bloem, Song (Sam) Zhou, Kai He, Zhunyou (Jony) Hua, and Yan Xia (Amazon)
Program Track: Commercial Case Studies
Abstract
Online retailer supply chain management involves decisions about how much and when to buy inventory, where to inbound new inventory, how to transfer inventory between warehouses, and how to fulfill customer orders. These choices must adhere to capacity constraints, such as labor plans, while minimizing impacts on customer service and profitability. This paper presents a dual decomposition framework and simulation-based cost search approach that maintains high customer service levels while better aligning buying purchase orders with inbound node arrival capacity plans. The methodology is evaluated through end-to-end discrete event simulations, which quantify the impact of candidate capacity costs on buying system behavior, as well as the downstream effects on other systems and ultimately on customer service metrics. Results demonstrate this approach can improve capacity adherence by over 60% across a five-week horizon, with less than a 4% degradation in a metric measuring the proximity of inventory to customers.

Revolutionizing Order Fulfillment: A Neural Network Approach to Optimal Warehouse Selection in Supply Chain Simulation
Weilong Wang, Michael Bloem, Jinxiang Gu, and Yunan Liu (Amazon)
Program Track: Commercial Case Studies
Abstract
Simulation plays a central role in the strategic planning and operational evaluation of supply chain networks. Within these networks, order fulfillment traditionally requires solving computationally expensive optimization problems in real-time across multiple constraints. For forward-looking simulations evaluating millions of orders, such optimization becomes prohibitively expensive.
We develop a neural network-based emulator that approximates optimal fulfillment decisions while maintaining millisecond-level inference speed. Operating at ZIP-code level resolution and incorporating shipping speed constraints, our model handles exponential decision spaces and non-stationary patterns. Empirical results demonstrate 56.75% order-level accuracy, a 20 percentage point improvement over benchmarks. Through novel regularization balancing order-level and network-level efficiency, we achieve 47.13% node-level accuracy while maintaining 50.31% order-level accuracy. Our model captures intricate patterns in historical fulfillment data, enabling efficient forward-looking simulation for strategic planning.

When Interpretability Meets Efficiency: Integrating Emulation into Supply Chain Simulation
Xiaoyu Lu, Yujing Lin, Hoiyi Ng, and Yunan Liu (Amazon)
Program Track: Commercial Case Studies
Abstract
For large-scale retail businesses such as Amazon and Walmart, simulation is critical for forecasting
inventory, planning, and decision-making. Traditional digital-twin simulators, which are powered by complex optimization algorithms, are high-fidelity but can be computationally expensive and difficult to experiment with. We propose a hybrid simulator called SEmulate that integrates machine learning and emulation into large-scale simulation systems while following the causal relationships between system components. SEmulate uses machine learning to emulate complex high-dimensional system components, while using physics-based modeling or heuristics for other, simpler components. We apply SEmulate to supply chain simulation, where we demonstrate that SEmulate reduces runtime by orders of magnitude compared to the traditional digital-twin-based simulator, while maintaining competitive accuracy and interpretability. The enhanced simulator enables efficient supply chain simulation, rapid experimentation, and a faster, improved decision-making process.

Commercial Case Studies: Logistics & Supply Chain Simulation - Part 1
Session Chair: Jason Ceresoli (Simio LLC)

A Federated Simulation Framework for Modeling Complex Logistics: Amazon Sort Center Case Study
Rahul Ramachandran, Hao Zhou, and Anish Goyal (Amazon)
Program Track: Commercial Case Studies
Abstract
Amazon's sort centers represent highly complex systems where traditional monolithic simulation approaches fall short due to computational limitations, extended time-to-value, and challenging model maintenance. This case study presents a federated simulation framework that decomposes the sort center model into four key federates: inbound operations, main sortation and outbound operations, non-conveyable processing, and non-compliant item handling. Each federate operates independently while maintaining synchronization through a conservative time-stepping algorithm and structured message-passing protocol.
Our implementation demonstrates significant advantages, including a 40% reduction in model development time, 15% improvement in model execution time, and improved scenario testing capabilities. This approach enables early optimization of sort center design, identification of cross-process bottlenecks, and better first-time launch quality. This real-world application showcases the practicality and benefits of federated simulation in modern logistics, offering valuable insights for practitioners modeling complex industrial systems.

Simulation-based Capacity Planning for Fleet Growth at Penske Truck Leasing
Morgan Mistysyn (Penske Truck Leasing) and Adam Sneath (Simio LLC)
Program Track: Commercial Case Studies
Abstract
This case study examines how Penske Truck Leasing utilized Simio simulation software to address capacity planning challenges associated with fleet growth. Facing the addition of 500 vehicles over five years to an already space-constrained facility, Penske’s Operational Excellence team developed a comprehensive simulation model to identify capacity constraints and evaluate potential solutions. The model analyzed multiple capacity dimensions including parking space, service bays, technician staffing, and support resources. Through simulation, Penske identified specific capacity ceilings, determined optimal timing for implementing various solutions, and provided facility managers with a data-driven roadmap for supporting growth while maintaining operational efficiency. This approach enabled Penske to make informed decisions about resource allocation, facility modifications, and staffing adjustments without the risks associated with real-world implementation.
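The Amazon sort-center abstract above describes federates synchronized through a conservative time-stepping algorithm. As a rough illustration of that synchronization rule only (the paper's implementation is not public; the `Federate` class, `run_coordinator` function, and event times below are invented for this sketch), the core idea fits in a few lines of Python: the global clock advances only to the earliest next event across all federates, so no federate ever processes an event from its logical past.

```python
# Illustrative sketch of conservative time-stepping synchronization
# for federated simulation. All names and event times are hypothetical.
class Federate:
    def __init__(self, name, event_times):
        self.name = name
        self.events = sorted(event_times)  # pending local event times
        self.processed = []                # (time, federate) log

    def next_event_time(self):
        return self.events[0] if self.events else float("inf")

    def advance_to(self, t):
        # Process every local event up to and including global time t.
        while self.events and self.events[0] <= t:
            self.processed.append((self.events.pop(0), self.name))

def run_coordinator(federates, horizon):
    """Conservative rule: the global clock only moves to the earliest
    next event across all federates, never past it."""
    steps = []
    while True:
        t = min(f.next_event_time() for f in federates)
        if t > horizon:
            break
        for f in federates:
            f.advance_to(t)
        steps.append(t)
    return steps

inbound = Federate("inbound", [1.0, 4.0])
sorter = Federate("sorter", [2.0, 3.0])
print(run_coordinator([inbound, sorter], horizon=10.0))  # [1.0, 2.0, 3.0, 4.0]
```

A production framework (e.g. HLA-style federations) would add lookahead and message queues between federates; the lockstep rule sketched here is the part that keeps the federates causally consistent.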
Commercial Case Studies: Simulation for Planning & Scheduling - Part 2
Session Chair: Prashant Tiwari (Cargill Inc)

Automated Block Arrangement Methodology for Shipyard Assembly Operations
Minsu Lee (HD Korea Shipbuilding)
Program Track: Commercial Case Studies
Abstract
In the shipbuilding industry, ships are constructed from large, prefabricated blocks in a fixed assembly workspace within a factory. The efficiency of the entire production schedule is highly dependent on how these blocks are arranged, as each has a unique geometry and a specific construction timeline. This study addresses this complex challenge by developing an optimization algorithm for block arrangement. The proposed method utilizes the No-Fit Polygon (NFP) to handle complex geometric constraints and employs Simulated Annealing (SA) to find a near-optimal arrangement that enhances overall workflow efficiency.

Strategic Production Transformation Through Simulation-Driven Decision Support
Prashant Tiwari (Cargill Inc)
Program Track: Commercial Case Studies
Abstract
To lead in delivering safer, premium-quality food ingredients, one of our plants is undergoing a strategic transformation. This initiative reflects a forward-thinking commitment to innovation and excellence, not just regulatory alignment. By transitioning to a new class of high-quality products, we are enhancing safety, expanding our portfolio, and positioning ourselves to better serve evolving market needs. This shift introduces operational complexities - frequent product changeovers, tighter inventory control, and outbound capacity constraints. To address these, we developed a robust simulation model, offering a comprehensive view of plant operations. The model enables us to test new configurations, quantify cost impacts, and proactively identify supply risks. Beyond immediate improvements, the simulation supports smarter infrastructure investments by evaluating storage expansion and modeling demand growth scenarios.
This data-driven approach strengthens long-term flexibility and resilience, ensuring our operations remain agile and aligned with future market dynamics.

Commercial Case Studies: Industry Learning Session - Using Project Objectives to Drive Simulation Quality Control
Session Chair: Andrey Malykhanov (Amalgama, MineTwin)

Using Project Objectives to Drive Simulation Quality Control
Michael Allen (Hindsight Consulting, Inc.)
Program Track: Commercial Case Studies
Abstract
How do we know if the results of a simulation analysis are useful, accurate, reliable and credible? This is an important and fundamental question. If a simulation produces invalid results, yet the client accepts and acts upon them, then they’re heading into a world of pain. Just as bad is the situation in which the results are valid but are distrusted by the client and so are disregarded. In both cases, the outcomes can be catastrophic for the client. If the model is invalid, and the client rightly rejects it, then we’ve wasted our time and dented our credibility. The usefulness, accuracy and reliability of a simulation model—and the client’s trust in its results—are key factors for a successful outcome; this presentation will demonstrate how project objectives should drive the model’s content, its verification & validation, and experimentation.

Simulating the Feasibility of the South Crofty Mine Restart Plan
Andrey Malykhanov (Amalgama Software Design) and Jaco-Ben Vosloo (MineTwin, Amalgama Software Design)
Program Track: Commercial Case Studies
Abstract
This study uses discrete-event simulation in MineTwin to evaluate the feasibility of the strategic restart plan for South Crofty, a historic underground tin mine in Cornwall, UK. The original plan, developed with a static scheduling tool, did not capture equipment interactions, queuing delays, or operational variability.
To overcome these limitations, we built a simulation model incorporating detailed mine layouts, production schedules, equipment specifications, and equipment coordination rules. The simulation revealed several constraints not identified in the static plan, including the need for additional loaders and insufficient surface haulage capacity. The simulation also enabled level-specific equipment allocation, which proved critical in pre-production phases with limited level connectivity. Overall, simulation complemented static planning by revealing dynamic bottlenecks and improving confidence in the operational feasibility of the restart plan.

Simulation-Based Optimization of Resident-Driven Healthcare Clinic Operations at Emory Healthcare
Dontavious Gaston (Emory Healthcare) and Mani Suresh (Simio LLC)
Program Track: Commercial Case Studies
Abstract
This case study examines the application of discrete-event simulation to optimize operations at Emory Healthcare’s Dunwoody Family Medicine clinic, a resident-driven healthcare facility. The clinic faced challenges with patient wait times and complex resident-preceptor interactions that impacted patient flow. Using Simio simulation software, researchers developed a digital twin model of clinic operations, incorporating data from electronic health records and time studies. The validated model identified key bottlenecks and tested multiple improvement scenarios. Results showed that implementing a first-come-first-served preceptor queue system could reduce preceptor waiting time by 31%, while strategic resident pod assignments could decrease travel time by 60%. This project demonstrates how simulation modeling can provide healthcare facilities with data-driven insights to improve operational efficiency while maintaining educational quality in teaching environments.
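The 31% improvement reported in the Emory abstract above comes from a first-come-first-served preceptor queue. As a toy illustration of that queueing discipline only (this is not the authors' Simio model; the function name, arrival times, and five-minute review time below are invented), an FCFS multi-server queue can be simulated with nothing more than a heap of server free times:

```python
import heapq

# Minimal FCFS multi-server queue sketch: residents join a single
# first-come-first-served queue and are served by the first of
# `n_preceptors` to become free. All parameters are hypothetical.
def fcfs_preceptor_queue(arrivals, service_time, n_preceptors):
    # `arrivals`: sorted times at which residents request a preceptor.
    free_at = [0.0] * n_preceptors  # min-heap of preceptor free times
    heapq.heapify(free_at)
    waits = []
    for t in arrivals:
        free = heapq.heappop(free_at)  # earliest-available preceptor
        start = max(t, free)           # FCFS: served in arrival order
        waits.append(start - t)
        heapq.heappush(free_at, start + service_time)
    return sum(waits) / len(waits)     # mean resident wait

# Two preceptors, residents arriving every 2 minutes, 5-minute reviews:
print(fcfs_preceptor_queue([0, 2, 4, 6, 8], service_time=5.0, n_preceptors=2))  # 0.8
```

Real studies like the one above layer stochastic arrivals, staffing schedules, and validation against clinic data on top of this basic discipline.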
Commercial Case Studies: Simulation for Material Handling Systems - Part 1
Session Chair: Justin Rist (Pennsylvania State University, Simio LLC)

Evaluating Crane Capacity and Avoiding Capital Costs: A Simulation Case Study at Novelis
Zach Buran, Abdurrahman Yavuz, and Chirag Agrawal (Novelis)
Program Track: Commercial Case Studies
Abstract
In complex aluminum manufacturing systems, optimizing interdependencies is critical to avoiding unnecessary capital investment and costly inefficiencies. At Novelis remelt and recycling plants, crane operations are central to metal movement and production flow, creating challenges when operational capacity is limited. To address this, the Novelis simulation team developed a 3D flow model in AnyLogic that mirrors crane operations, incorporating factors like processing times, schedules, and downtime. The innovative simulation combined real-world data with dynamic modeling to evaluate crane capacity under varying conditions. Unlike static methods, the cloud-deployed model featured an intuitive interface, allowing plant teams to test scenarios and identify capacity thresholds. Initial results showed the existing crane could meet increased production goals through reliability improvements and staffing changes, avoiding multimillion-dollar capital expenditure. The extended abstract will outline the simulation methodology, model architecture, interface features, and outcomes that enabled the plant’s successful adoption of the tool.
Flexible Simulation for Optimizing Seegrid’s Autonomous Mobile Robot Solutions
Sydney Schooley, Abby Perlee, and Sofia Panagopoulou (Seegrid Corporation) and Adam Sneath (Simio LLC)
Program Track: Commercial Case Studies
Abstract
This case study examines how Seegrid, a leading manufacturer of autonomous mobile robot (AMR) solutions, partnered with Simio to develop a comprehensive discrete event simulation template model that reliably represents Seegrid AMR solutions, enabling better visualization of vehicle interactions, congestion reduction, and optimal fleet sizing. Simio’s template-based approach enables Seegrid’s large team of skilled Application Engineers to quickly and accurately build models of customer automated workflows. The collaboration demonstrates how simulation can transform solution design for modern industrial autonomous material handling, providing both immediate operational insights and a foundation for future innovation.

Simulating Inventory Slotting Strategies in a Robotic Mobile Fulfillment Center
Justin Rist (Pennsylvania State University, Simio LLC) and Paul Griffin (Pennsylvania State University)
Program Track: Commercial Case Studies
Abstract
This case study presents a discrete event simulation model of a robotic mobile fulfillment center (RMFC) built in Simio. The model evaluates how different inventory slotting strategies affect operational performance at three facility scales: a small, medium, and large system. Inspired by a shoe fulfillment center, the model features standardized shelf sizes and a large number of SKUs due to variations in shoe models, sizes, and colors. A synthetic dataset was created to reflect realistic order volumes and SKU diversity without relying on proprietary data. Key performance metrics include order cycle time, pile-on, and AGV travel distance. The goal is to demonstrate how simulation enables robust evaluation of storage strategies in variable-rich RMFC environments.
This work provides a foundation for comparing slotting methods and exploring how their effectiveness changes as facility size scales.

Commercial Case Studies: Industry Panel - Simulation in Industry: Past, Present, & Future
Session Chair: Michael Allen (Hindsight Consulting, Inc.)

Industry Panel: Simulation in Industry: Past, Present, & Future
Mike Allen (Hindsight Consulting Inc.), Ganesh Nanaware (Amazon), Jonathan Fulton (Boeing), and Andrey Malykhanov (Amalgama Software Design)
Program Track: Commercial Case Studies
Abstract
This panel, Simulation in Industry: Past, Present, & Future, takes a wide-angle view of the barriers that continue to limit the use of simulation for major capital expenditure (Capex) decisions, with a focus on sectors like manufacturing, material handling, and automation. Rather than spotlighting individual case studies, the discussion will explore systemic and cultural obstacles that slow adoption, such as organizational inertia, messaging gaps, and misaligned expectations between technical teams and business leaders. In addition to diagnosing current challenges, panelists will offer reflections on how industrial simulation has evolved, where it stands today, and what changes are needed to ensure it plays a more central role in decision-making by 2050. This session invites the community to think critically about how we frame the value of simulation and how we must evolve to meet the future.

Commercial Case Studies: Simulation for Healthcare
Session Chair: Maryam Hosseini (EwingCole)

Designing Scalable Cell Therapy Processes: A Framework for Simulation, Bottleneck Analysis, and Scale-up Evaluation
Maryam Hosseini (EwingCole)
Program Track: Commercial Case Studies
Abstract
Cell therapy manufacturing presents unique challenges due to its high complexity, stringent regulatory requirements, and patient-specific variability.
To enhance production efficiency and ensure robust scalability, simulation modeling emerges as an effective strategy to evaluate and refine manufacturing processes without disrupting ongoing operations. This study introduces a comprehensive framework that integrates discrete-event simulation (DES) to model critical cell therapy manufacturing stages—including inbound receipt and verification, QC sampling, material and kit-building, media fill process, manufacturing, cryopreservation, and outbound logistics. The framework not only captures resource constraints, batch scheduling, equipment utilization, and potential failure modes but also identifies the root causes of process bottlenecks. Further, it provides a roadmap for scenario design and scale-up analysis, enabling systematic sensitivity evaluations to quantify impacts on throughput, cost, and lead time. The findings underscore the framework’s potential to support capacity planning, workforce allocation, and quality risk management, thereby accelerating commercial readiness and improving patient access.

Modeling Successful Infusion Strategies
Jim Montgomery (HonorHealth)
Program Track: Commercial Case Studies
Abstract
This case study explores the use of Discrete Event Simulation (DES) to optimize resource planning and patient flow management in a high-volume oncology infusion center anticipating a five-year surge in demand. Conducted at HonorHealth Cancer Center, the simulation model, built with MedModel software (BigBear.ai), incorporated empirical data and subject matter expertise to evaluate several strategic initiatives, including extended hours of operation, inter-clinic chair sharing, optimized patient and pharmacy hood scheduling, and front office staffing adjustments. The five-year planning scenarios revealed that, without intervention, the center would experience severe capacity limitations.
However, the integration of targeted improvements significantly enhanced chair availability, reduced patient queue times, and increased overall operational efficiency. This study highlights the transformative role of DES as a decision-support tool, underscoring the importance of including a DES consultant on facility planning and design teams.

Commercial Case Studies: Industry Learning Session - How and When to Scope and Simplify Industry Simulations
Session Chair: Nathan Ivey (Rockwell Automation Inc.)

How and When to Scope and Simplify Industry Simulations
Nathan Ivey, Melanie Barker, and Nancy Zupick (Rockwell Automation Inc.)
Program Track: Commercial Case Studies
Abstract
In simulation projects, model complexity often competes with limited resources, tight timelines, and stakeholder expectations. Effective scoping is essential to ensure efforts are focused on the right questions and the appropriate level of detail. A well-defined scope guides decisions on system boundaries, performance metrics, modeling approaches, and abstraction levels, thereby supporting sound analysis while protecting the modeler from scope creep and misaligned expectations. Within this framework, the ability to make thoughtful simplifying assumptions becomes a critical modeling skill. When carefully applied, these assumptions preserve model validity and decision-making utility while improving development speed, stakeholder communication, and model ownership. This presentation shares practical strategies for scoping simulation projects and applying assumptions effectively, illustrated through real-world examples.

Safety and Productivity Validation through Fast Iterative Discrete-Event Simulation with a Lightweight Industrial Simulator
Takumi Kato and Zhi Hu (Hitachi America, Ltd.) and Rafael Suarez (Hitachi Rail USA, Inc.)
Program Track: Commercial Case Studies
Abstract
Enhancing operational efficiency while ensuring worker safety is a key objective in industrial system design. It is particularly important in environments where human workers and forklifts share the same physical space. Rolling stock factories often include warehouse areas that manage both large and small parts, relying on a combination of workers and forklifts to perform material transport. These warehouses play a critical role in supporting production by executing kitting operations, which represent the initial step in the assembly process and demand significant coordination and time management. This study employs discrete-event simulation (DES) to evaluate the efficiency and safety of various warehouse layout and process configurations prior to implementation. A lightweight industrial simulator, built on the Rapid Modeling Architecture (RMA), enabled rapid iterations of simulation-based analyses and supported effective decision-making before physical changes on the factory floor.

Digital Twin-based Reinforcement Learning for Optimization of Lot Merging Efficiency in Final Test Operations of Semiconductor Productions
Youngtaek Seo (Sungkyunkwan University, Samsung Electronics) and Sangdo Noh (Sungkyunkwan University)
Program Track: Commercial Case Studies
Abstract
The final test operation in semiconductor production, which verifies the electrical functionality of packaged chips, demands highly responsive logistics and equipment control strategies to meet delivery targets, ensure quality, and accommodate urgent requests. In complex production environments involving diverse products and many different process constraints, optimizing both lot merging efficiency and equipment utilization becomes increasingly critical.
This paper presents an integrated framework that combines digital twin simulation with deep reinforcement learning (a Rainbow DQN agent) to enhance the success rate of lot merging and improve throughput. While preserving the existing MES-driven dispatch logic, the framework introduces a learning-based policy that leverages equipment characteristics—particularly the constraint that higher-parallelism tools have narrower merging time windows. The agent learns to dynamically adjust lot-to-tool assignments, reducing resource waste and idle time, while supporting more efficient and predictable production scheduling.

Commercial Case Studies: Simulation in Manufacturing - Part 2
Session Chair: Chris Tonn (Spirit AeroSystems)

Simulation-based Auto-generated PLC Program Validation for Establishing Autonomous Manufacturing in the Automotive Sector
Jun Yong So, Joong Ho Nam, and Sung Sik Lee (Hyundai Motor Company); Myeong Jin Seo (KIA); and Deog Hyeon Kim (Hyundai Motor Company)
Program Track: Commercial Case Studies
Abstract
As automotive manufacturing systems grow in complexity, the demand for efficient and dependable PLC programming continues to rise. Conventional manual programming relies on individual expertise, leading to many opportunities for human error. This study introduces a framework that automates PLC code generation using flowchart-based logic and validates the output through simulation. The approach translates flowcharts into IEC 61131-3 compliant ladder logic via a domain-specific language and verifies functionality using 3D simulation environments integrated with OPC UA protocols. This method improves consistency, shortens the development cycle of control logic, and reduces programming errors. Its effectiveness and scalability are demonstrated through a real-world application in an electric vehicle body shop process.
Using Discrete Event Simulation to Evaluate the Impact of Layouts and Resource Allocation in Packaging Systems
Stephen Wilkes, Shannon Browning, and Zary Peretz (The Haskell Company)
Program Track: Commercial Case Studies
Abstract
A manufacturer was considering a temporary operation in a constrained space. Simple calculations showed a possible bottleneck at the dock doors. An alternative was to build a connector to an adjacent warehouse, at significant capital expense, to improve flexibility and throughput. A discrete-event simulation was built to evaluate the benefit of investment in the connector. The model demonstrated the material handling constraint and that the connector had payback potential.

Paint Facility Sizing for Commercial Aerospace Manufacturing
Chris Tonn and Randy Allenbach (Spirit AeroSystems Inc.)
Program Track: Commercial Case Studies
Abstract
Demand for an established paint facility is expected to change significantly due to a legacy product phasing out over time and the introduction of a new product. The new product is physically larger, changing the rate at which parts can flow through the paint facility. Two primary questions were asked: 1) Can existing paint facilities meet future demand, and if not, when will demand exceed capacity? 2) If new paint booth technology were deployed, how much paint facility capacity would be required? Two discrete event simulation models were developed, one to answer each question. The current state model played a primary role in identifying that demand would exceed capacity before a new system could be installed and then quantified the impact of implementing a provisional booth. The future state model quantified the resources required in various demand profiles and equipment configurations to ensure proper throughput and process lead-times.
Commercial Case Studies: Industry Learning Session - Simulation and AI
Session Chair: Andrei Borshchev (The AnyLogic Company)

Simulation of Amazon Inbound Cross Dock Facilities
Sumant Joshi, Howard Tseng, Anish Goyal, and Ganesh Nanaware (Amazon)
Program Track: Commercial Case Studies
Abstract
Discrete Event Simulation (DES) frameworks have become essential tools in Amazon’s logistics network for analyzing and optimizing warehouse operations. This paper presents a modular DES framework developed using FlexSim to model operations across multiple Amazon IXD facilities. Validated against historical data, the model accurately replicates operational dynamics and supports applications spanning from current-building optimization, particularly of cross-belt sorter recirculation parameters, to testing future automation concepts. Key challenges, including human operational behavior and unexpected operational events, are discussed. The study concludes by highlighting future directions in emulation and network model connectivity to enhance simulation fidelity and enable real-time data exchange across the logistics network.

Simulation and AI
Andrei Borshchev (The AnyLogic Company)
Program Track: Commercial Case Studies
Abstract
This presentation aims to provide guidance on navigating the technology stack of digital transformation — with a focus on the complex, multifaceted relationship between Simulation and AI. We’ll discuss three key topics: 1) Why hasn’t simulation, as a training environment for machine learning, taken off at scale? 2) Can AI (and LLMs in particular) replace simulation? 3) How does AI affect the simulation model lifecycle, and how can it assist the modeler? We will also touch on the evolving requirements today’s industry places on simulation technologies.
Commercial Case Studies: Simulation for Material Handling Systems - Part 2
Session Chair: Robert Backhouse (Ocado Technology)

A Simulation-based Vehicle Fleet Sizing Procedure in Automated Material Handling Systems: An Application to a Display Fab
Gwangheon Lee (Pusan National University); Junheui Lee, Kyunil Jang, and Woonghee Lee (Samsung Display); and Soondo Hong (Pusan National University)
Program Track: Commercial Case Studies
Abstract
This study proposes a simulation-based optimization framework and its application to vehicle fleet sizing in automated material handling systems (AMHSs). The framework comprises two stages: simulation calibration and optimization. The simulation calibration adjusts the simulation input parameters using field observations to obtain a more reliable simulation model. The simulation optimization determines the best number of vehicles to minimize flowtime under performance constraints based on the calibrated simulation. We validated the framework through application to a display fabrication plant in South Korea. The framework effectively modulates the fleet size in response to material flow and exhibits stable performance in the real-world AMHS.

AGV Optimization Simulation: How Dijitalis Saved $1.5 Million in Electronics Manufacturing
Tolgahan Tarkan and Tolga Yanaşık (Dijitalis) and Paul Glaser (Simio LLC)
Program Track: Commercial Case Studies
Abstract
This case study presents how Dijitalis Consulting utilized simulation modeling to optimize Automated Guided Vehicle (AGV) investments for a major electronics manufacturing facility. The client, a TV manufacturing company with 15 assembly lines, was initiating significant investments, including a new AGV fleet to replace its outdated system of 132 vehicles. Using Simio simulation software, Dijitalis created a comprehensive digital model of the facility’s material handling operations, analyzing traffic patterns, potential bottlenecks, and resource utilization.
The simulation revealed that only 95 AGVs were required instead of the initially proposed 132, resulting in capital expenditure savings exceeding $1.5 million. Beyond cost savings, the simulation model became a continuous improvement tool for testing layout modifications, process changes, and production schedule feasibility. This case demonstrates how simulation-based decision making can prevent overinvestment while ensuring operational requirements are met. pdfA Product Digital Twin: How Simulation Evolved Alongside the World's Most Advanced Grocery Picking Robot Robert Backhouse (Ocado Technology) Program Track: Commercial Case Studies Abstract AbstractOcado Technology developed On-Grid Robotic Pick (OGRP) - a robotic picking solution for grocery fulfillment centers - with coordinated innovation across hardware, software, and system design. To support this development, we built a low-fidelity simulation model early and evolved it alongside the product. As OGRP matured, assumptions from each area were progressively integrated into the model, while simulation insight shaped design decisions - forming a “product digital twin”.
We describe how this approach of co-evolving the simulation supported exploration of a large and uncertain design space - aligning development streams, guiding rapid iteration, and enabling early development - ultimately driving faster delivery with lower risk.
OGRP is now deployed around the world, picking over a million items per week and continuing to scale. This case study illustrates how evolving, system-level simulations can drive coordination in complex, fast-changing engineering projects. pdf Commercial Case StudiesIndustry Learning Session - Miscommunication in Simulation: Same Terms, Different Models Session Chair: Amy Greer (MOSIMTEC, LLC) Miscommunication in Simulation: Same Terms, Different Models Amy Greer (MOSIMTEC, LLC) and Jiaxi Zhu (Google) Program Track: Commercial Case Studies Abstract AbstractThe simulation industry has its own terms that are often used inconsistently between professionals. For example, much time is spent trying to define what makes a digital twin different from a simulation model. Inconsistent use of simulation terminology often leads to project dysfunctions, such as scope creep, misaligned expectations and solutions that fail to meet business objectives.
In this presentation, we will present a wide range of terms representing the most common modeling techniques. The focus is to provide simulation professionals with varying definitions of the same term, so they can be prepared to ask the right questions to ensure project success. While this presentation focuses on clarifying framework terminology more than ideal framework selection, attendees will gain a better understanding of various frameworks being used in simulation modeling. pdfArchitectural Considerations for Data Entry in Sustainment Modeling Kenneth Rebstock (Systecon North America) Program Track: Commercial Case Studies Abstract AbstractThis extended abstract examines the architectural considerations for data entry in sustainment Modeling and Simulations (M&S). It highlights the necessity of having a flexible yet robust data architecture that can accommodate various data types, including early design engineering estimates, test data, and post-deployment maintenance data, which often come in different formats. By establishing a data structure that facilitates efficient extraction, transformation, and loading from multiple sources, sustainment models can be quickly developed using the best available information.
A case study involving a commercial off-the-shelf fixed-wing aircraft—used and maintained by the U.S. military for decades—demonstrates the architecture’s adaptability across the system lifecycle. The new M&S framework supports dynamic model design and integration with evolving data inputs, enabling scalable, timely, and insightful trade-off analyses that meet the tight schedules encountered by program managers. pdf Commercial Case StudiesLogistics & Supply Chain Simulation - Part 2 Session Chair: Nanaware Ganesh (Amazon) Modular Physics Simulation Approach for Amazon Fulfillment Facilities Sumant Joshi, Gandhi Chidambaram, Anish Goyal, and Ganesh Nanaware (Amazon) Program Track: Commercial Case Studies Abstract AbstractPhysics simulation model have become essential tools in Amazon’s fulfillment network for analyzing and optimizing material handling equipment (MHE) design. This paper presents a modular framework developed using Emulate3D to model MHE across multiple Amazon fulfillment facilities. The framework incorporates three main components: material to be handled module, material flow control module and package chute module. Through comprehensive validation against pilot results and statistical analysis, the model demonstrates high fidelity in replicating real-world operational dynamics. The framework’s applications span from future automated conveyance flow control logic optimization to existing chute design retrofits. The physics model is used to identify chute jams before launch, location of sensors, optimal jam free merge logic, reducing time/cost for development. Key challenges including package softness, human operator behavior representation, and unexpected operational events are discussed. The study concludes by highlighting future directions in physics simulation to enhance simulation fidelity. 
pdfStrategic and Tactical Selection of Hubs for Rail-Based Metal Scrap Collection Ralf Elbert, Paul Bossong, Raphael Hackober, Ren Kajiyama, and Aylin Altun (Technische Universität Darmstadt) Program Track: Commercial Case Studies Abstract AbstractThe steel industry’s transformation toward sustainable production leads to an increased use of steel scrap to produce recycled steel. This necessitates a redesign of rail-based supply chains that have been historically designed to serve steel production from coal and iron ore. For a large German rail freight company, our study investigates which shunting yards should potentially be designated as hubs that bundle steel scrap transports. We tackle the complexity of the resulting logistics system with an agent-based simulation model to evaluate the performance of various hub configurations and demand scenarios from a strategic and tactical perspective. Our results show that a smaller number of hubs significantly reduces transport times and operating costs. However, this efficiency comes with increased vulnerability to disruptions, highlighting a trade-off between cost-efficiency and robustness. Our case study offers actionable insights into the efficient and sustainable design of commercial steel scrap transport networks. pdf
Complex and Generative Systems Track Coordinator - Complex and Generative Systems: Saurabh Mittal (MITRE Corporation), Claudia Szabo (The University of Adelaide, University of Adelaide) Complex Systems, Complex and Generative Systems, Manufacturing and Industry 4.0Digital Twins in Manufacturing Session Chair: Guodong Shao (National Institute of Standards and Technology) Characterizing Digital Factory Twins: Deriving Archetypes for Research and Industry Jonas Lick and Fiona Kattenstroth (Fraunhofer Institute for Mechatronic Systems Design IEM); Hendrik Van der Valk (TU Dortmund University); and Malte Trienens, Arno Kühn, and Roman Dumitrescu (Fraunhofer Institute for Mechatronic Systems Design IEM) Program Track: Manufacturing and Industry 4.0 Program Tag: Complex Systems Abstract AbstractThe concept of the digital twin has evolved to a key enabler of digital transformation in manufacturing. The adoption of digital twins for factories or digital factory twins remain fragmented and often unclear, particularly for small and medium-sized enterprises. This study addresses this ambiguity by systematically deriving archetypes of digital factory twins to support clearer classification, planning, and implementation. Based on a structured literature review and expert interviews, 71 relevant DFT use cases were identified. The result of the conducted cluster analysis is four distinct archetypes: (1) Basic Planning Factory Twin, (2) Advanced Simulation Factory Twin, (3) Integrated Operations Factory Twin, and (4) Holistic Digital Factory Twin. Each archetype is characterized by specific technical features, data integration levels, lifecycle phases, and stakeholder involvement. 
pdfDistributed Hierarchical Digital Twins: State-of-the-Art, Challenges and Potential Solutions Aatu Kunnari and Steffen Strassburger (Technische Universität Ilmenau) Program Track: Manufacturing and Industry 4.0 Abstract AbstractDigital Twins (DT) provide detailed, dynamic representations of production systems, but integrating multiple DTs into a distributed ecosystem presents fundamental challenges beyond mere model interoperability. DTs encapsulate dynamic behaviors, optimization goals, and time management constraints, making their coordination a complex, unsolved problem. Moreover, DT development faces broader challenges, including but not limited to data consistency, real-time synchronization, and cross-domain integration, that persist at both individual and distributed scales. This paper systematically reviews these challenges, examines how current research addresses them, and explores their implications in distributed, hierarchical DT environments. Finally, it presents preliminary ideas for a structured approach to orchestrating multiple DTs, laying the groundwork for future research on holistic DT management. pdfSynergic Use of Modelling & Simulation, Digital Twins and Large Language Models to Make Complex Systems Adaptive and Resilient Souvik Barat, Dushyanthi Mulpuru, Himabindu Thogaru, Abhishek Yadav, and Vinay Kulkarni (Tata Consultancy Services Ltd) Program Track: Complex and Generative Systems Abstract AbstractModeling and Simulation (M&S) has long been essential for decision-making in complex systems due to its ability to explore strategic and operational alternatives in a structured and risk-free manner. The emergence of Digital Twins (DTs) has further enhanced this by enabling real-time bidirectional synchronization with physical systems. However, constructing and maintaining accurate and adaptive models and DTs remains time- and resource-intensive and requires deep domain expertise. 
In this paper, we introduce an adaptive Decision-Making Framework (DMF) that integrates Large Language Models (LLMs) and Model-Driven Engineering (MDE) into the M&S and DT pipeline. By leveraging LLMs as proxy experts and synthesis aids, and combining them with MDE to improve reliability, our framework reduces manual effort in model construction, validation and decision space exploration. We present our approach and discuss how it improves agility, reduces expert dependency and can act as a pragmatic aid for making enterprise robust, resilient and adaptive. pdf
Data Science and Simulation Track Coordinator - Data Science for Simulation: Abdolreza Abhari (Toronto Metropolitan University), Cheng-bang Chen (University of Miami), Niclas Feldkamp (Ilmenau University of Technology) AnyLogic, Data Analytics, Python, Data Science and SimulationData Analytics Session Chair: Emma Von Hoene (George Mason University) Leveraging OpenStreetMap Information to Identify Cluster Centers in Aggregated Movement Data Maylin Wartenberg and Luca Marco Heitmann (Hochschule Hannover) and Marvin Auf der Landwehr (FehrAdvice & Partners AG) Program Track: Data Science and Simulation Program Tags: AnyLogic, Data Analytics Abstract AbstractAggregated movement data is widely used for traffic simulations, but privacy constraints often limit data granularity, requiring the use of centroids as cluster representatives. However, centroids might locate cluster centers in contextually irrelevant areas, such as an open field, leading to inaccuracies. This paper introduces a method that leverages an aggregation of points of interest (POIs) such as bus stops or buildings from OpenStreetMap as cluster centers. Using trip data from a suburban region in Germany, we evaluate the spatial deviation between centroids, POIs, and real trip origins and destinations. Our findings show that POI-based centers reduce spatial deviation by up to 46% compared to centroids, with the greatest improvements in rural areas. Furthermore, in an agent-based mobility simulation, POI-based centers significantly reduced travel distances. These results demonstrate that POI-based centers offer a context-aware alternative to centroids, with significant implications for mobility modeling, urban planning, and traffic management. 
pdfBeyond Co-authorship: Discovering Novel Collaborators With Multilayer Random-Walk-Based Simulation in Academic Networks Best Contributed Theoretical Paper - Finalist Siyu Chen, Keng Hou Leong, and Jiadong Liu (Tsinghua University); Wei Chen (Tsinghua University, Tencent Technology (Shenzhen) Co. LTD.); and Wai Kin Chan (Tsinghua University) Program Track: Data Science and Simulation Program Tags: Data Analytics, Python Abstract AbstractAcademic collaboration is vital for enhancing research impact and interdisciplinary exploration, yet finding suitable collaborators remains challenging. Conventional single-layer random walk methods often struggle with the heterogeneity of academic networks and limited recommendation novelty. To overcome these limitations, we propose a novel Multilayer Random Walk simulation framework (MLRW) that simulates scholarly interactions across cooperation, institutional affiliation, and conference attendance, enabling inter-layer transitions to capture multifaceted scholarly relationships. Tested on the large-scale SciSciNet dataset, our MLRW simulation framework significantly outperforms conventional random walk methods in accuracy and novelty, successfully identifying potential collaborators beyond immediate co-authorship. Our analysis further confirms the significance of institutional affiliation as a collaborative predictor, validating its inclusion. This research contributes a more comprehensive simulation approach to scholar recommendations, enhancing the discovery of latent practical collaborations. Future research will focus on integrating additional interaction dimensions and optimizing weighting strategies to further improve diversity and relevance. 
pdf Data Driven, Python, Rare Events, Sampling, System Dynamics, Variance Reduction, Data Science and SimulationMachine Learning Session Chair: Abdolreza Abhari (Toronto Metropolitan University) When Machine Learning Meets Importance Sampling: A More Efficient Rare Event Estimation Approach Ruoning Zhao and Xinyun Chen (The Chinese University of Hong Kong, Shenzhen) Program Track: Data Science and Simulation Program Tags: Rare Events, Sampling, Variance Reduction Abstract AbstractDriven by applications in telecommunication networks, we explore the simulation task of estimating rare event probabilities for tandem queues in their steady state. Existing literature has recognized that importance sampling methods can be inefficient, due to the exploding variance of the path-dependent likelihood functions. To mitigate this, we introduce a new importance sampling approach that utilizes a marginal likelihood ratio on the stationary distribution, effectively avoiding the issue of excessive variance. In addition, we design a machine learning algorithm to estimate this marginal likelihood ratio using importance sampling data. Numerical experiments indicate that our algorithm outperforms the classic importance sampling methods. pdfExploring Integration of Surrogate Models Through A Case Study on Variable Frequency Drives Dušan Šturek (Karlsruhe Institute of Technology, Danfoss Power Electronics and Drives) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, University of Southern Denmark) Program Track: Data Science and Simulation Program Tags: Data Driven, System Dynamics Abstract AbstractHigh-fidelity simulation models of variable frequency drives often incur expensive computation due to high granularity, complex physics and highly stiff components, hindering real-time Digital Twin Industry 4.0 applications. 
Surrogate models can outperform simulation solvers by orders of magnitude, potentially making real-time virtual drives feasible within practical computational limits. Despite this potential, current surrogate models suffer from limited generalizability and robustness. In this paper, we present an industrial case study exploring the combination of deep learning with surrogate modeling for simulating variable frequency drives, specifically replacing the induction motor high-fidelity component. We investigate the performance of Long Short-Term Memory-based surrogates, examining how their prediction accuracy and training time vary with synthetic datasets of different sizes, and how well the induction motor surrogates generalize across different motor resistances. This initial study aims to establish a foundation for further development, benchmarking, and automation of the surrogate modeling workflow for simulation enhancement. pdfGenerating Artificial Electricity Data For Monitoring Energy Consumption In Smart Cities Sina Pahlavan, Wael Shabana, and Abdolreza Abhari (Toronto Metropolitan University) Program Track: Data Science and Simulation Program Tag: Python Abstract AbstractGeneration of synthetic data for energy demand allows simulation-based forecasting for infrastructure planning, building optimization, and energy management, key elements of smart cities. This study compares multivariate kernel density estimation (KDE) and time-series generative adversarial networks (TimeGAN) for their ability to generate realistic time series that preserve crucial feature relationships for forecasting. The evaluation is based on both statistical similarity and predictive performance using machine learning models, focusing on seasonal and hourly consumption patterns. The results emphasize the importance of temporal consistency and justify synthetic augmentation when real data is limited, especially for time-aware energy forecasting tasks, and demonstrate how synthesized data can be used when forecasting future energy demand. pdf Data Driven, Python, Supply Chain, Data Science and SimulationSystem Dynamics Session Chair: Abdolreza Abhari (Toronto Metropolitan University) Advanced Dynamic Spare Parts Inventory Management Utilizing Machine Health Data Best Contributed Applied Paper - Finalist Jennifer Kruman, Avital Kaufman, and Yale Herer (Technion) Program Track: Data Science and Simulation Program Tags: Python, Supply Chain Abstract AbstractThis research presents a novel approach to spare parts inventory management by integrating real-time machine health data with dynamic, state-dependent inventory policies. Traditional static models overlook the evolving conditions of industrial machinery. Leveraging advanced digital technologies, such as those pioneered by Augury, our framework dynamically adjusts inventory levels, reducing costs and improving service. Using Markov chain modeling, simulation, and industry collaboration, we demonstrate up to 29% cost savings with state-dependent policies over static base-stock models. Sensitivity analysis confirms the robustness of these strategies. pdfSupply Chain Optimization via Generative Simulation and Iterative Decision Policies Haoyue Bai (Arizona State University); Haoyu Wang (NEC Labs America); Nanxu Gong, Xinyuan Wang, and Wangyang Ying (Arizona State University); Haifeng Chen (NEC Labs America); and Yanjie Fu (Arizona State University) Program Track: Data Science and Simulation Program Tags: Data Driven, Python, Supply Chain Abstract AbstractHigh responsiveness and economic efficiency are critical objectives in supply chain transportation, both of which are influenced by strategic decisions on shipping mode. An integrated framework combining an efficient simulator with an intelligent decision-making algorithm can provide an observable, low-risk environment for transportation strategy design. 
An ideal simulation-decision framework must (1) generalize effectively across various settings, (2) reflect fine-grained transportation dynamics, (3) integrate historical experience with predictive insights, and (4) maintain tight integration between simulation feedback and policy refinement. We propose the Sim-to-Dec framework to satisfy these requirements. Specifically, Sim-to-Dec consists of a generative simulation module, which leverages autoregressive modeling to simulate continuous state changes, reducing dependence on handcrafted domain-specific rules and enhancing robustness against data fluctuations; and a history–future dual-aware decision model, refined iteratively through end-to-end optimization with simulator interactions. Extensive experiments conducted on three real-world datasets demonstrate that Sim-to-Dec significantly improves timely delivery rates and profit. pdfPredictive Vision of Physics of Decision: Modelisation of Virus Propagation with Force Fields Paradigm Benoit Morvan (IMT Mines Albi) Program Track: Data Science and Simulation Abstract AbstractThis paper introduces a novel method for simulating complex system behaviors using a specific geometric space and force fields within that space. The approach considers the system's performance as a physical trajectory defined by its performance indicators and environmental attributes, which can be deflected by force fields representing risks or opportunities within the system. The primary contribution of this work is the proposal of a method that uses multiple trajectories of a defined system to identify force fields that accurately represent the system. pdf Complex Systems, Data Driven, Process Mining, Python, Data Science and SimulationData Driven Modeling Session Chair: Niclas Feldkamp (Ilmenau University of Technology) Automated Business Process Simulation Studies: Where do Humans Fit In? 
Samira Khraiwesh and Luise Pufahl (Technical University of Munich) Program Track: Data Science and Simulation Program Tags: Data Driven, Process Mining Abstract AbstractBusiness Process Simulation (BPS) is crucial for enhancing organizational efficiency and decision-making, enabling organizations to test process changes in a virtual environment without real-world consequences. Despite advancements in automatic simulation model discovery using process mining, BPS is still underused due to challenges in accuracy. Human-in-the-Loop (HITL) integrates human expertise into automated systems, where humans guide, validate, or intervene in the automation process to ensure accuracy and context. This paper introduces a framework identifying key stages in BPS studies where HITL can be applied and the factors influencing the degree of human involvement. The framework is based on a literature review and expert interviews, providing valuable insights and implications for researchers and practitioners. pdfPENM: A Parametric Evolutionary Network Model for Scholar Collaboration Network Simulation Jiadong Liu, Keqin Guan, and Siyu Chen (Tsinghua University); Wei Chen (Tsinghua University, Tencent Technology (Shenzhen) Co. LTD.); and Wai Kin Victor Chan (Tsinghua University) Program Track: Data Science and Simulation Program Tag: Python Abstract AbstractIdentifying suitable collaborators has become an important challenge in research management, where insights into the structure and evolution of scholar collaboration networks are essential. However, existing studies often adopt static or locally dynamic views, limiting their ability to capture long-term network evolution. To address these issues, this paper introduces PENM (Parametric Evolutionary Network Model), a simulation framework designed to model the evolution of scholar collaboration networks through parametric mechanisms. 
The PENM simulates node and edge evolution through probabilistic rules and tunable parameters, reflecting realistic academic behaviors like cumulative collaboration and co-author expansion. We provide a theoretical analysis of the model's growth patterns under varying parameters and verify these findings through simulation. Evaluations on real-world datasets demonstrate that PENM evolves networks with degree distributions closely aligned with actual scholar networks. PENM offers a versatile simulation-based approach for modeling academic collaboration dynamics, enabling applications and simulation of future academic ecosystems. pdfExploring Data Requirements for Data-Driven Agent-Based Modeling Hui Min Lee (Karlsruhe Institute of Technology) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, The Maersk Mc Kinney Moller Institute) Program Track: Data Science and Simulation Program Tags: Complex Systems, Data Driven Abstract AbstractExtracting Agent-Based Models (ABMs) from data, also known as Data-Driven Agent-Based Modeling (DDABM), requires a clear understanding of data requirements and their mappings to the corresponding ABM components. DDABM is a relatively new and emerging topic, and as such, there are only highly customized and problem-specific solutions and approaches. In our previous work, we presented a framework for DDABM, identifying the different components of ABMs that can be extracted from data. Building on this, the present study provides a comprehensive analysis of existing DDABM approaches, examining prevailing trends and methodologies, focusing on the mappings between data and ABM components. By synthesizing and comparing different DDABM approaches, we establish explicit mappings that clarify data requirements and their role in enabling DDABM. Our findings enhance the understanding of DDABM and highlight the role of data in automating model extraction, highlighting its potential for advancing data-driven agent-based simulations. 
pdf Data Driven, Process Mining, Python, R, Data Science and SimulationData Analysis in Healthcare Applications Session Chair: Emma Von Hoene (George Mason University) Modeling and Simulation of Surgical Procedures with an Application to Laparoscopic Cholecystectomy Yiyu Wang and Vincent Augusto (Ecole des Mines de Saint-Etienne), Canan Pehlivan (IMT Mines Albi), Julia Fleck (Ecole des Mines de Saint-Etienne), and Nesrine Mekhenane (Chaire Bopa) Program Track: Data Science and Simulation Program Tags: Process Mining, Python Abstract AbstractSurgeons’ actions are key to surgical success. Our objective is to develop a decision-support tool to help prioritize patient safety and reduce risks during surgery. We propose a structured mathematical framework that defines key components of a surgical procedure, making it adaptable to various types of surgeries. Using the CholecT50 dataset, we generate and pre-process event logs to construct a process map that models the surgical workflow through Process Mining techniques. This process map provides insights into procedural patterns and can be visualized at different levels of granularity to align with surgeons’ needs. To validate its effectiveness, we simulate synthetic surgeries and assess the process map’s performance in
replicating real surgical workflows. By demonstrating the generalizability of our approach, this work paves the way for the development of an advanced decision-support tool that can assist surgeons in real-time decision-making and post-operative analysis. pdfEvaluating the Transferability of a Synthetic Population Generation Approach for Public Health Applications Emma Von Hoene (George Mason University); Aanya Gupta (Thomas Jefferson High School for Science and Technology); and Hamdi Kavak, Amira Roess, and Taylor Anderson (George Mason University) Program Track: Data Science and Simulation Program Tags: Data Driven, R Abstract AbstractSimulations are valuable in public health research, with synthetic populations enabling realistic policy analysis. However, methods for generating synthetic populations with domain-specific characteristics remain underexplored. To address this, we previously introduced a population synthesis approach that directly integrates health surveys. This study evaluates its transferability across health outcomes, locations, and timeframes through three case studies. The first generates a Virginia population (2021) with COVID-19 vaccine intention, comparing results to probabilistic and regression-based approaches. The second synthesizes populations with depression (2021) for Virginia, Tennessee, and New Jersey. The third constructs Virginia populations with smoking behaviors for 2021 and 2022. Results demonstrate the method’s transferability for various health applications, with validation confirming its ability to capture accuracy, statistical relationships, and spatial heterogeneity. These findings enhance population synthesis for public health simulations and offer new datasets with small-area estimates for health outcomes, ultimately supporting public health decision-making. pdf
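The general idea behind population synthesis can be illustrated with a deliberately simplified sketch: sample each attribute independently from survey marginals. The marginals, attribute names, and function below are hypothetical; the authors' method additionally preserves joint relationships (e.g., via regression or iterative proportional fitting), which this toy version does not:

```python
import random

def synthesize_population(n, marginals, seed=42):
    """Draw a toy synthetic population by sampling each attribute
    independently from survey marginal distributions. Only the
    marginals are matched; correlations between attributes are lost."""
    rng = random.Random(seed)
    people = []
    for _ in range(n):
        person = {
            attr: rng.choices(list(dist), weights=list(dist.values()))[0]
            for attr, dist in marginals.items()
        }
        people.append(person)
    return people

# Hypothetical marginals from a health survey
marginals = {
    "age_group": {"18-34": 0.3, "35-64": 0.5, "65+": 0.2},
    "vaccine_intent": {"yes": 0.7, "no": 0.3},
}
pop = synthesize_population(1000, marginals)
share = sum(p["vaccine_intent"] == "yes" for p in pop) / len(pop)
print(round(share, 2))  # close to the 0.7 marginal
```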
Environment, Sustainability, and Resilience Track Coordinator - Environment, Sustainability, and Resilience: Jiaqi Ge (Leeds University), Shima Mohebbi (George Mason University) AnyLogic, Complex Systems, Conceptual Modeling, Open Source, Python, Resiliency, Environment, Sustainability, and ResilienceSimulation and Socio-Environmental Resilience Session Chair: Daniel Jun Chung Hii (Kajima Corporation, Kajima Technical Research Institute Singapore) A Framework for Modeling and Simulation of Multi-dimensional Coupled Socio-Environmental Networked Experiments Vanessa Ines Cedeno (University of Virginia, Escuela Superior Politecnica del Litoral) and Majid Shafiee-Jood (University of Virginia) Program Track: Environment, Sustainability, and Resilience Program Tags: Complex Systems, Conceptual Modeling Abstract AbstractCoupled Socio-Environmental Networked experiments have been used to represent and analyze complex social phenomena and environmental issues. There is a lack of theory on how to accurately model diverse entities and the connections between them across different spatial and temporal scales. This gap often leads to significant challenges in the modeling, simulating, and analysis of formal experiments. We propose a framework that facilitates software implementation of multi-dimensional coupled socio-environmental networked experiments. Our approach includes: (i) a formal data model paired with a computational model, together providing abstract representations, and (ii) a modeling cycle that maps socio-environmental interactions over time, allowing for multi-action, interactive experiments. The framework is flexible, allowing for a wide variety of network models, interactions, and action sets. We demonstrate its applicability through a case study on agroecological transitions, showing how the modeling cycle and data model can be used to explore socio-environmental phenomena. 
pdfInfluence of Norms in Alliance Characteristics of Humanitarian Food Agencies: Capability, Compatibility and Satisfaction Naimur Rahman Chowdhury and Rashik Intisar Siddiquee (North Carolina State University) and Julie Simmons Ivy (University of Michigan) Program Track: Environment, Sustainability, and Resilience Program Tags: Conceptual Modeling, Python, Resiliency Abstract AbstractHunger relief networks consist of agencies that work as independent partners within a food bank network. For these networks to effectively and efficiently reduce food insecurity, strategic alliances between agencies are crucial. Agency preference for forming alliances with other agencies can impact network structure and network satisfaction. In this paper, we explore the compatibility and satisfaction achieved by alliances between different agencies. We introduce two agency norms: conservative and diversifying. We develop an agent-based simulation model to investigate alliance formation in a network. We evaluate network satisfaction, satisfaction among different types of agencies, and alliance heterogeneity. We test the statistical significance of satisfaction within a norm and between norms for different agencies. Findings reveal that the ‘diversifying’ norm in the network reduces gaps in satisfaction between strong and weak agencies, ensuring fairness for weaker agencies in the network, whereas the ‘conservative’ norm favors moderate agencies in the network. 
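The norm-driven partner choice described in the abstract above can be caricatured in a few lines. This is an illustrative sketch only, with invented agency names and strength scores, not the authors' agent-based model:

```python
def form_alliances(strengths, norm):
    """Toy alliance formation: each agency picks the partner that best
    matches its norm. A 'conservative' agency favors partners of similar
    strength; a 'diversifying' agency favors complementary (dissimilar)
    strength."""
    pairs = {}
    for a, sa in strengths.items():
        others = [(b, sb) for b, sb in strengths.items() if b != a]
        if norm == "conservative":
            partner = min(others, key=lambda x: abs(x[1] - sa))[0]
        else:  # diversifying
            partner = max(others, key=lambda x: abs(x[1] - sa))[0]
        pairs[a] = partner
    return pairs

# Hypothetical agencies with normalized strength scores
strengths = {"food_bank": 0.9, "pantry": 0.4, "shelter": 0.5}
print(form_alliances(strengths, "conservative"))
# → {'food_bank': 'shelter', 'pantry': 'shelter', 'shelter': 'pantry'}
```

Under the diversifying norm, the same agencies instead seek out the most dissimilar partner, which is the mechanism the paper links to fairer satisfaction for weaker agencies.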
pdfBuilding a Climate Responsive Agent-Based Modeling Simulation for the Walkability of the Tropical Hot and Humid Environment Daniel Jun Chung Hii and Takamasa Hasama (Kajima Corporation); Majid Sarvi (The University of Melbourne); and Marcel Ignatius, Joie Yan Yee Lim, Yijun Lu, and Nyuk Hien Wong (National University of Singapore) Program Track: Environment, Sustainability, and Resilience Program Tags: AnyLogic, Open Source Abstract AbstractClimate change affects thermal comfort and wellness by restricting the walkability potential of the built environment. This is especially true outdoors, under the harsh solar radiation exposure of the tropical hot and humid climate. A passive shading strategy plays the most significant role in walkability potential. Vegetation and man-made structures such as pavements provide shade for comfortable navigation, with the latter being a more sustainable and wellbeing-friendly solution. The walkability potential can be simulated using the agent-based modelling (ABM) technique. As a heat mitigation strategy to improve walkability, the most direct intervention is to improve the connectivity of the shading zone along the shortest path between strategic locations. People tend to walk faster and choose the shortest path when dealing with direct sun exposure while avoiding it entirely if it gets unbearably hot. The ABM simulation is useful for efficient urban planning of walkability potential on campus. pdf Environment, Sustainability, and ResilienceCritical Infrastructure Resilience Session Chair: Xudong Wang (University of Tennessee, Knoxville) Agent-Based Simulation of Price-Demand Dynamics in Multi-Service Charging Station Xudong Wang (University of Tennessee, Knoxville); Yang Chen (Oak Ridge National Laboratory); Brody Skipper (University of Tennessee - Knoxville); Olufemi A. 
Omitaomu (Oak Ridge National Laboratory); and Xueping Li (University of Tennessee - Knoxville) Program Track: Environment, Sustainability, and Resilience Abstract AbstractAs the adoption of electric vehicles and hydrogen fuel-cell vehicles grows, understanding how dynamic pricing strategies influence charging and refueling behaviors becomes crucial for optimizing local energy markets. This paper proposes a simulation-based analysis of a hydrogen-electricity integrated charging station that serves both types of vehicles. A multi-agent simulation framework is developed to model the interactions between vehicles and the station, incorporating price- and delay-sensitive behaviors in decision-making. The station can dynamically adjust energy prices, while vehicles optimize their charging or refueling choices based on their utility values. A series of sensitivity analyses are conducted to evaluate how electricity pricing, infrastructure capacity, and waiting behavior impact station performance. Results highlight that moderate electricity prices maximize user participation without sacrificing profit, infrastructure should be right-sized to demand to avoid over- or underutilization, and delay tolerance also affects service outcomes, with maximum service coverage reached at a waiting threshold of 45 minutes. pdfA Theory to Quantitatively Estimate and Bound Systemic Cyber Risk Ranjan Pal, Konnie Duan, Sander Zeijlemaker, and Michael Siegel (Massachusetts Institute of Technology) Program Track: Environment, Sustainability, and Resilience Abstract AbstractBusiness enterprises have grappled over the last decade and a half with unavoidable risks of (major) cyber incidents on critical infrastructure (e.g., power grid, cloud systems). 
The market to manage such risks using cyber insurance (CI) has been growing steadily (but not fast enough), as insurers remain skeptical of the extent of the economic and societal impact of systemic cyber risk across networked supply chains in interdependent IT-driven enterprises. To demystify this skepticism, we study in this paper the role of (a) the statistical nature of multiple enterprise cyber risks contributing to aggregate supply chain risk and (b) the graph structure of the underlying enterprise supply chain network, in the statistical estimation and spread of aggregate cyber risk. More specifically, we provide statistical tail bounds on the aggregate cyber-risk that a cyber risk management firm such as a cyber insurer is exposed to in a supply chain. pdf Conceptual Modeling, Data Driven, Environment, Sustainability, and ResilienceResilient Energy Systems Session Chair: Primoz Godec (Uppsala University) Utilization of Virtual Commissioning for Simulation-Based Energy Modeling and Dimensioning of DC-Based Production Systems Martin Barth (Friedrich-Alexander-Universität Erlangen-Nürnberg); Philipp Herkel (Friedrich-Alexander-Universität Erlangen-Nürnberg); Benjamin Gutwald and Jan Hinrich Krüger (Friedrich-Alexander-Universität Erlangen-Nürnberg); Tobias Schrage (Technische Hochschule Ingolstadt); and Tobias Reichenstein and Jörg Franke (Friedrich-Alexander-Universität Erlangen-Nürnberg) Program Track: Environment, Sustainability, and Resilience Program Tag: Conceptual Modeling Abstract AbstractThe transition toward DC-based industrial grids demands accurate yet efficient planning methods. This paper presents a simulation-driven approach that integrates existing virtual commissioning (VC) models to estimate power demand in early design stages. A minimal-parameter modeling technique is proposed to extract dynamic load profiles from 3D multibody simulations, which are then used for electrical component sizing and system optimization. 
The methodology is validated using a demonstrator setup consisting of a lift tower and a six-axis industrial robot, both tested under AC and DC operation. Simulation results show close agreement with real measurements, highlighting the method’s ability to capture realistic load behavior with low modeling effort. The approach offers seamless integration into existing planning workflows and supports dimensioning of safe, efficient, and regulation-compliant DC production systems. This contributes to reducing component oversizing, improving energy efficiency, and accelerating the adoption of DC grid architectures in industrial environments. pdfBalancing Airport Grid Load: The Role of Smart EV Charging, Solar, and Batteries Primoz Godec and Steve McKeever (Uppsala University) Program Track: Environment, Sustainability, and Resilience Program Tag: Data Driven Abstract AbstractAs the aviation industry works to reduce carbon emissions, airport energy optimization has also been brought into focus. This study explores strategies to reduce peak electricity demand at a major Swedish airport driven by increased electric vehicle (EV) charging. EV charging increases grid load, but integrating solar power and battery storage helps stabilize fluctuations and reduce peaks. We present a framework and simulate the combined impact of these factors, demonstrating that smart scheduling with solar and battery systems effectively balances the load. This approach reduces high-load occurrences from 8.6% to 2.5% (where 100% would mean exceeding the threshold year-round), even with 500 additional charging points. pdf
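As a rough illustration of the kind of peak-shaving logic the airport study above describes, the sketch below greedily moves EV charging energy out of over-threshold hours into hours with headroom and compares the share of high-load hours before and after. All load values and the threshold are invented, and the assumption that energy can be shifted freely between hours (ignoring vehicle arrival/departure constraints, solar output, and battery dynamics) is a simplification for illustration, not the paper's method.

```python
# Illustrative sketch (not the paper's model): shift EV charging energy
# from over-threshold hours into hours with headroom, then compare the
# fraction of high-load hours. All numbers are made up.

def high_load_share(load, threshold):
    """Fraction of hours whose total load exceeds the grid threshold."""
    return sum(1 for x in load if x > threshold) / len(load)

def smart_shift(base, ev, threshold):
    """Greedily move excess energy to the hour with the most headroom."""
    load = [b + e for b, e in zip(base, ev)]
    for h in range(len(load)):
        while load[h] > threshold:
            t = min(range(len(load)), key=lambda i: load[i])
            if load[t] >= threshold:  # nowhere left to shift
                break
            move = min(load[h] - threshold, threshold - load[t])
            load[h] -= move
            load[t] += move
    return load

base = [40, 35, 30, 45, 60, 70, 65, 50]   # hypothetical non-EV demand (kW)
ev   = [5, 5, 5, 10, 25, 30, 20, 10]      # hypothetical EV charging (kW)
threshold = 80.0

naive = [b + e for b, e in zip(base, ev)]
shifted = smart_shift(base, ev, threshold)
print(high_load_share(naive, threshold), high_load_share(shifted, threshold))
```

Total energy is conserved by the shift; only its timing changes, which is the essence of smart scheduling against a grid threshold.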
Healthcare and Life Sciences Track Coordinator - Healthcare and Life Sciences: Tugce Martagan (Northeastern University), Varun Ramamohan (Department of Mechanical Engineering, Indian Institute of Technology Delhi) Healthcare and Life SciencesInfectious Disease Modeling and Interventions Session Chair: Sarah Mulutzie (North Carolina State University) Quantifying the Impact of Proactive Community Case Management on Severe Malaria Cases Using Agent-Based Simulation Xingjian Wang and Hannah Smalley (Georgia Institute of Technology), Julie Gutman (Centers for Disease Control and Prevention), and Pinar Keskinocak (Georgia Institute of Technology) Program Track: Healthcare and Life Sciences Abstract AbstractMalaria remains a major global health threat, especially for children under five, causing hundreds of thousands of deaths annually. Proactive Community Case Management (ProCCM) is an intervention designed to enhance early malaria detection and treatment through routine household visits (sweeps), complementing existing control measures. ProCCM is crucial in areas with limited healthcare access and low treatment-seeking rates, but its effectiveness depends on transmission intensity and the coverage of existing interventions. To quantify the impact of ProCCM, we calibrated an agent-based simulation model for settings with seasonal transmission and existing interventions. We evaluated how different ProCCM scheduling strategies perform under varying treatment-seeking rates in reducing severe malaria cases. Our proposed heuristics—greedy and weighted—consistently outperformed a standardized, uniformly spaced approach, offering practical guidance for designing more effective and adaptive malaria control strategies. 
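To make the contrast between uniformly spaced and greedy sweep scheduling concrete, here is a toy sketch (not the paper's calibrated agent-based model): weekly transmission risk follows a made-up seasonal curve, each sweep is assumed to avert a fixed fraction of risk in its week and neighboring weeks, and the greedy schedule places each sweep wherever it most reduces residual risk. All parameter values are hypothetical.

```python
# Toy comparison of greedy vs uniformly spaced sweep schedules.
# Risk curve, reach, and averted fraction are illustrative assumptions.
import math

WEEKS = 26
risk = [1 + math.sin(2 * math.pi * w / WEEKS) for w in range(WEEKS)]  # seasonal

def residual_after(schedule, risk, reach=1, avert=0.6):
    """Total risk left after each sweep cuts nearby weeks' risk by `avert`."""
    r = list(risk)
    for w in schedule:
        for d in range(-reach, reach + 1):
            if 0 <= w + d < len(r):
                r[w + d] *= (1 - avert)
    return sum(r)

def greedy_schedule(k, risk, **kw):
    """Place k sweeps one at a time, each minimizing residual risk."""
    schedule = []
    for _ in range(k):
        best = min(range(len(risk)),
                   key=lambda w: residual_after(schedule + [w], risk, **kw))
        schedule.append(best)
    return schedule

uniform = [WEEKS * i // 4 + WEEKS // 8 for i in range(4)]  # evenly spaced
greedy = greedy_schedule(4, risk)
print(residual_after(greedy, risk), residual_after(uniform, risk))
```

On this toy curve the greedy schedule concentrates sweeps around the seasonal peak and leaves less residual risk than even spacing, mirroring the qualitative finding reported in the abstract.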
pdfDeveloping and Deploying a Use-inspired Metapopulation Modeling Framework for Detailed Tracking of Stratified Health Outcomes Arindam Fadikar, Abby Stevens, Sara Rimer, Ignacio Martinez-Moyano, and Nicholson Collier (Argonne National Laboratory); Chol Mabil, Emile Jorgensen, Peter Ruestow, and V. Eloesa McSorley (Chicago Department of Public Health); and Jonathan Ozik and Charles Macal (Argonne National Laboratory) Program Track: Healthcare and Life Sciences Abstract AbstractPublic health experts studying infectious disease spread often seek granular insights into population health outcomes. Metapopulation models offer an effective framework for analyzing disease transmission through subpopulation mixing. These models strike a balance between traditional, homogeneous mixing compartmental models and granular but computationally intensive agent-based models. In collaboration with the Chicago Department of Public Health (CDPH), we developed MetaRVM, an open-source R package for modeling the spread of infectious diseases in subpopulations, which can be flexibly defined by geography, demographics, or other stratifications. MetaRVM is designed to support real-time public health decision-making, and through its co-creation with CDPH, we have ensured that it is responsive to real-world needs. We demonstrate its flexible capabilities by tracking influenza dynamics within different age groups in Chicago, integrating an efficient Bayesian optimization-based calibration approach. pdfModeling Social Influence on COVID-19 Vaccination Uptake within an Agent-Based Model Sarah Mulutzie, Sebastian Rodriguez-Cartes, Osman Ozaltin, Julie Swann, and Maria Mayorga (North Carolina State University) Program Track: Healthcare and Life Sciences Abstract AbstractVaccination is a critical intervention to mitigate the impact of infectious disease outbreaks. 
However, the vaccination decision is complex and influenced by various factors such as individual beliefs, access to vaccines, trust in healthcare systems, and, importantly, social norms within communities, i.e., the shared understandings and expectations about vaccination behavior. This paper analyzes the impact of social norms on vaccine uptake and subsequent disease transmission by explicitly incorporating these norms into an extension of the agent-based COVID-19 simulation model, COVASIM. We aim to analyze how social norms affect vaccination rates and disease spread. We demonstrate this by implementing community-specific vaccination norms that influence agents through the perceived vaccination behaviors of their social networks. Our simulated case study explored targeted communication about vaccination uptake through different age groups. Through this intervention, we examined the effectiveness of adjusting perceptions of community vaccine uptake to better align with its true value. pdf Simio, Healthcare and Life SciencesSimulation in Emergency and Critical Care Session Chair: Zhaoqi Wang (University of Toronto) Optimizing Emergency Department Throughput: A Discrete Event Simulation Study to Mitigate the Impact of Imminent Patient Volume Increases at Standalone Emergency Department Liam Coen (Northwell Health); Peter Woods (Northwell Greenwich Village Hospital, Northwell Health); Rachel Bruce (Northwell Greenwich Village Hospital); Gillian Glenn (Northwell Greenwich Village Hospital, Northwell Health); and Shaghayegh Norouzzadeh (Northwell Health) Program Track: Healthcare and Life Sciences Program Tag: Simio Abstract AbstractThis study optimized resource allocation in a standalone Emergency Department projected to experience a 10-30% patient volume increase. Combining data analysis, interviews, and process mapping, a Discrete Event Simulation model was created in Simio, replicating patient flow. 
The model revealed the ED could manage a 20% volume surge with minor staffing adjustments while maintaining current resources. At 20% increased volume, key metrics such as door-to-provider and treat-and-release times increased to 18 and 200 minutes, surpassing 2023 results by 38% and 12%, respectively. However, exceeding 20% led to an 87% utilization rate for nighttime nurses, creating a potential bottleneck. Minor staffing adjustments mitigated increased treat-and-release times under moderate volume surges, and the site used simulation optimization results to add an 8-hour shift of provider support in the Sunday nighttime hours. This framework offers valuable insights for other EDs anticipating similar challenges, enabling proactive resource management and process optimization. pdfEvaluating the Impact of Psychiatric Emergency Units on Pediatric Mental and Behavioral Health Outcomes Farzad Zeinali, Kevin Taaffe, Samuel Nelson Koscelny, David Neyens, Devi Soman, and Anjali Joseph (Clemson University) and Ann Dietrich (Prisma Health) Program Track: Healthcare and Life Sciences Abstract AbstractThe increasing prevalence of pediatric mental and behavioral health (MBH) conditions has driven a rise in emergency department (ED) visits, often worsening crowding and straining resources. Psychiatric Emergency Units (PEUs) have emerged as a potential solution to address these challenges by diverting medically stable MBH patients into a calm, specialized setting. We developed a discrete-event simulation of a pediatric ED setting in South Carolina to evaluate the impact of implementing a PEU. Times across various patient journey segments and resource utilization were assessed under varying patient arrival rates and unit capacities. Results showed a shorter length of stay, faster time to disposition and psychiatric evaluation times, and improved room and bed utilization. 
These findings suggest that PEUs can help hospitals manage increasing MBH volumes more effectively, mitigate system overload, and enhance the quality of pediatric MBH care. pdfSimulating Patient-Provider Interaction in ICU Alarm Response: A Hybrid Modeling Approach Zhaoqi Wang (University of Toronto), Christopher Parshuram (The Hospital for Sick Children), and Michael Carter (University of Toronto) Program Track: Healthcare and Life Sciences Abstract AbstractPatients in intensive care units (ICU) require continuous monitoring and care. When physiological abnormalities occur, an alarm is triggered to alert healthcare providers. The alarm response time is influenced by factors such as patient-population profile, care team configuration, staff workload, and unit layout. Response delays can directly impact patient outcomes, emphasizing the need for adequate emergency management capability. While prior simulation studies have explored ICU operations, they often oversimplify dynamic and concurrent interactions between patients and healthcare providers. This study presents a proof-of-concept hybrid simulation model that integrates discrete event simulation (DES) and agent-based simulation (ABS) to comprehensively represent a pediatric ICU environment. By simulating routine activities, patient-triggered alarms, and real-time interactions, the model investigates how varied resource configurations affect response times and outcomes. Built on scalable logic and realistic workflows, the model serves as a foundation for future clinical data integration and supports the development of ICU decision-support applications. 
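As a minimal illustration of the discrete-event side of such a model, the sketch below simulates alarms arriving at random and being handled first-come-first-served by a shared nurse pool, so that mean response delay can be compared across staffing levels. The arrival rate, handling time, and staffing numbers are invented for illustration and are unrelated to the paper's calibrated ICU model.

```python
# Minimal discrete-event sketch of alarm response with a shared nurse pool
# (an illustration of the modeling idea, not the paper's hybrid ICU model).
import heapq
import random

def simulate(num_nurses, n_alarms=1000, seed=1):
    """Mean alarm response delay for a given staffing level (FCFS)."""
    rng = random.Random(seed)
    free_at = [0.0] * num_nurses          # time each nurse next becomes free
    heapq.heapify(free_at)
    t, delays = 0.0, []
    for _ in range(n_alarms):
        t += rng.expovariate(1.0)         # alarms arrive ~1 per minute (assumed)
        nurse_free = heapq.heappop(free_at)
        start = max(t, nurse_free)        # wait if all nurses are busy
        delays.append(start - t)
        heapq.heappush(free_at, start + rng.expovariate(1 / 3.0))  # ~3 min handling
    return sum(delays) / len(delays)

# more nurses -> shorter mean response delay
print(simulate(3), simulate(5))
```

The min-heap of "next free" times is a standard trick for multi-server FCFS queues; swapping in empirical arrival and handling distributions, routine-care interruptions, and agent behavior is where the hybrid DES/ABS model in the paper goes further.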
pdf Neural Networks, Python, SIMUL8, Healthcare and Life SciencesHospital Operations and Capacity Management Session Chair: Marta Cildoz (Public University of Navarre) Discrete Event Simulation for Sustainable Hospital Pharmacy: The Case of Aseptic Service Unit Fatemeh Alidoost, Navonil Mustafee, Thomas Monks, and Alison Harper (University of Exeter) Program Track: Healthcare and Life Sciences Program Tag: SIMUL8 Abstract AbstractWithin hospital pharmacies, aseptic units preparing high-risk injectable medicines face environmental and economic challenges due to resource-intensive processes and emissions. Variability in patient dosage requirements leads to inefficient drug vial usage, resulting in waste generation, carbon emissions from waste, and increased costs. Batching could be used to reduce resource consumption and reduce waste associated with single-dose preparation. This study develops a discrete event simulation, as a tool for strategy evaluation and experimentation, to assess the impact of batching on productivity and sustainability. The model captures key process dynamics, including prescription arrivals, production processes, and resources consumed. By experimenting with time-sensitive and size-based batching, the study evaluates their effects on the reduction of medical and nonmedical waste, thereby contributing to cost savings, reduction of carbon emissions, and productivity by enhancing workflow efficiency. This study offers insights for hospital pharmacies to evaluate the effectiveness of batching strategies for reducing waste and promoting sustainability. 
pdfMulti-fidelity Simulation Framework for the Strategic Pooling of Surgical Assets Sean Shao Wei Lam (Singapore Health Services, Duke NUS Medical School); Boon Yew Ang (Singapore Health Services); Marcus Eng Hock Ong (Singapore Health Services, Duke NUS Medical School); and Hiang Khoon Tan (Singapore General Hospital, SingHealth Duke-NUS Academic Medicine Centre) Program Track: Healthcare and Life Sciences Program Tags: Neural Networks, Python Abstract AbstractThis study describes a multi-fidelity simulation framework integrating a high-fidelity discrete event simulation (DES) model with a machine learning (ML)-based low-fidelity model to optimize operating theatre (OT) scheduling in a major public hospital in Singapore. The high-fidelity DES model is trained and validated with real-world data and the low-fidelity model is trained and validated with synthetic data derived from simulation runs with the DES model. The high-fidelity model captures system complexities and uncertainties while the low-fidelity model facilitates policy optimization via the multi-objective non-dominated sorting genetic algorithm (NSGA-II). The optimization algorithm can identify Pareto-optimal policies under varying open access (OA) periods and strategies. Pareto-optimal policies are derived across the dual objectives of maximizing OT utilization (OTU) and minimizing waiting time to surgery (WTS). These policies support post-hoc evaluation within an integrated decision support system (DSS). 
pdfSupporting Strategic Healthcare Decisions With Simulation: A Digital Twin For Redesigning Traumatology Services Marta Cildoz and Miguel Baigorri (Public University of Navarre), Isabel Rodrigo-Rincón (University Hospital of Navarre), and Fermin Mallor (Public University of Navarre) Program Track: Healthcare and Life Sciences Program Tag: Python Abstract AbstractReducing waiting times in specialized healthcare has become a pressing concern in many countries, particularly in high-demand services such as traumatology. This study introduces a simulation-based approach to support strategic decision-making for redesigning the referral interface between Primary Care and specialized care, as well as reorganizing internal pathways in the Traumatology Service of the University Hospital of Navarre (Spain). A discrete-event simulation model, developed using real patient data and designed to capture the system’s transient behavior from its current state, is employed to evaluate the effects of these changes on key performance indicators such as number of consultations per patient, physician workload, and waiting list reduction. The model also evaluates how different referral behaviors among Primary Care physicians influence system performance. Results demonstrate the model’s capacity to provide evidence-based guidance for strategic healthcare decisions and highlight its potential to evolve into a digital twin for continuous improvement and operational planning. 
pdf Open Source, Python, Healthcare and Life SciencesPatient Access and Care Delivery Session Chair: Vishnunarayan Girishan Prabhu (University of Central Florida) A Simulation-Based Evaluation of Strategies for Communicating Appointment Slots to Outpatients Aparna Venkataraman (University of Queensland, Indian Institute of Technology Delhi); Sisira Edirippulige (University of Queensland); and Varun Ramamohan (Indian Institute of Technology Delhi) Program Track: Healthcare and Life Sciences Program Tags: Open Source, Python Abstract AbstractIn this paper, we consider an outpatient consultation scheduling system with equal-length slots wherein a set of slots each day is reserved for walk-ins. Specifically, we consider the following questions in deciding slot start times to communicate to scheduled patients: (a) should a patient's typical arrival offset relative to the communicated slot start time (i.e., whether they tend to arrive early or late) be considered when deciding which slot start time to communicate, and (b) what impact does rounding the slot start time to the nearest 5th or 10th minute have on relevant outcomes? We answer these questions using a validated discrete-event simulation of an FCFS outpatient appointment system in a hospital accommodating both scheduled and walk-in patients. We also describe the development of the simulation itself, which is designed to optimize policies regarding management of walk-in patients and integration of telemedicine. pdfTransforming American Prenatal Care Delivery Through Discrete Event Simulation Adrianne Blanton, Annie He, Jillian Uy, Krithika Venkatasubramanian, Leena Ghrayeb, Vincenzo Loffredo, Amirhossein Moosavi, Amy Cohn, and Alex Peahl (University of Michigan) Program Track: Healthcare and Life Sciences Abstract AbstractFormalized models of prenatal care delivery have changed little since 1930, typically including 12-14 in-person visits. 
Recently released guidelines recommend tailoring prenatal care visit frequency based on patient risk level, potentially integrating telemedicine to increase flexibility for patients. In this paper, we design a discrete event simulation-based approach to study the impact of these new guidelines on healthcare systems, focusing on three main metrics: appointment slot utilization, appointment delays due to lack of capacity, and skipped appointments due to patient cancellations. We find that the tailored approach reduces appointment slot utilization while supporting the same volume of patients as the conventional approach. We also see a decrease in appointment delays and skipped appointments. Our findings suggest that adopting a tailored prenatal care model can reduce provider burnout, allow clinics to accept more patients, and enhance patients’ care experience. pdfHybrid Modeling and Simulation for Enhancing Patient Access, Safety and Experience Vishnunarayan Girishan Prabhu (University of Central Florida) and Anupama Ramachandran and Steven Alexander (Stanford Medicine Health Care) Program Track: Healthcare and Life Sciences Abstract AbstractIn recent years, hybrid modelling and simulation have become increasingly popular in healthcare for analyzing and improving systems such as patient flow, resource allocation, scheduling, and policy evaluation. These methods combine at least two simulation approaches (discrete-event simulation, system dynamics, and agent-based modelling) and may also integrate techniques from operations research and management sciences. Their ability to represent complex, dynamic systems has driven their adoption across various healthcare domains. This paper presents a case study of an emergency department (ED) where a hybrid framework combining forecasting models, hybrid simulation, and mixed-integer linear programming was used to optimize physician shift scheduling and improve patient flow and safety. 
The model outperformed current practices by reducing patient handoffs by 5.6% and decreasing patient time in the ED by 9.2%, without a budget increase. Finally, we propose incorporating reinforcement learning in future work to enable adaptive, data-driven decision-making and further enhance healthcare delivery performance. pdf Healthcare and Life SciencesPanel: Advancing Simulation in Healthcare and Life Sciences: A Panel Discussion of Future Research Directions Session Chair: Tugce Martagan (Northeastern University) Advancing Simulation in Healthcare and Life Sciences: A Panel Discussion of Future Research Directions Moria F. Bittmann (National Institutes of Health), Chaitra Gopalappa (University of Massachusetts Amherst), Tugce Martagan (Northeastern University), Maria E. Mayorga (North Carolina State University), Anup C. Mokashi (Memorial Sloan Kettering Cancer Center), and Varun Ramamohan (Indian Institute of Technology Delhi) Program Track: Healthcare and Life Sciences Abstract AbstractThis paper is motivated by a panel organized by the Healthcare and Life Sciences track at the 2025 Winter Simulation Conference (WSC). We summarize the panelists' perspectives and reflect on current trends and future research directions for simulation applications in healthcare and life sciences. We begin with a brief review of key methodologies and application trends from the past decade of WSC proceedings. We then present expert insights from a range of application areas, including (bio)pharmaceutical manufacturing, hospital operations, public health and epidemiology, and modeling human behavior. The panelists provide diverse perspectives from academia and industry, and highlight emerging challenges, opportunities, and future research directions to advance simulation in healthcare and life sciences. 
pdf Python, Healthcare and Life SciencesResource Allocation and Quality in Healthcare Systems Session Chair: Coen Dirckx (MSD, Netherlands; Eindhoven University of Technology) Agent-Based Model of Dynamics between Objective and Perceived Quality of Healthcare System Jungwoo Kim (KAIST), Moo Hyuk Lee (Seoul National University College of Medicine), Ji-Su Lee (KAIST), Young Kyung Do (Seoul National University College of Medicine), and Taesik Lee (KAIST) Program Track: Healthcare and Life Sciences Program Tag: Python Abstract AbstractNationwide patient concentration poses a significant burden on healthcare systems, largely due to patients’ perception that metropolitan regions offer superior care quality. To better understand this phenomenon, we present an agent-based model to examine how objective quality (OQ) and perceived quality (PQ) co-evolve in a free-choice healthcare system, using South Korea as a salient case. Four mechanisms — preferential hospital choice, scale effect, quality recognition, and word-of-mouth — form a feedback loop: concentration raises OQ, utilization updates PQ, and perceptions diffuse through the population. We identify three emergent phenomena — local dominance, global dominance, and asymmetric quality recognition — and interpret how each contributes to patient outmigration. Building on these insights, we further explore strategies such as “local tiering” and “information provision.” This model-based approach deepens understanding of OQ–PQ dynamics, and offers insights for addressing nationwide healthcare utilization in various contexts. pdfEvaluating Liver Graft Acceptance Policies Using a Continuous-Time Markov Chain Simulation Framework Jiahui Luo (Dartmouth College); Mariel S. Lavieri, David W. Hutton, Lawrence C. An, and Neehar D. Parikh (University of Michigan); and Wesley J. 
Marrero (Dartmouth College) Program Track: Healthcare and Life Sciences Abstract AbstractLiver transplantation is the second most common transplant procedure in the United States and the only curative treatment for patients with end-stage liver disease, which is one of the leading causes of death nationwide. The United Network for Organ Sharing operates the national liver transplant waiting list and allocates organs under a complex priority system based on medical urgency, geography, and waiting time. Healthcare providers accept or refuse liver offers based on transplant candidates’ medical needs and donor quality, among other factors. We develop a simulation environment to assess current acceptance practices based on a Markov reward process. Our simulation framework models organ arrivals and patients' health progression as continuous-time processes and mimics how decisions are made in practice using a randomized policy. Based on our simulation framework, we provide insights and identify areas for enhancing patient management and liver offer acceptance. pdfBackpass in Biomanufacturing: Effective Strategies for Sharing Bioreactors Coen Dirckx (MSD, Netherlands; Eindhoven University of Technology); Rick Kapteijns (Eindhoven University of Technology); Melvin Drent (Tilburg University); and Tugce Martagan (Northeastern University) Program Track: Healthcare and Life Sciences Abstract AbstractBiopharmaceutical drugs have transformed modern medicine, yet their manufacturing processes remain challenged by yield variability and rising production costs. This paper explores a promising application in a new domain to improve biomanufacturing efficiency through the backpass production method. Our production setting consists of two bioreactors: a dedicated bioreactor for production of a high-value product A, and a shared bioreactor for product A and a lower-value product B. 
The backpass method allows biomass to be transferred from the dedicated to the shared bioreactor, enabling additional production of product A while bypassing extensive upstream processing steps. We examine the performance of three backpass strategies, defined based on feedback from our industry partner, and use discrete event simulation to answer industry-specific questions related to the system's performance regarding throughput and profitability. This analysis provides practitioners with a decision-support framework for capital investments and operational planning. pdf Data Analytics, Open Source, Ranking and Selection, Healthcare and Life SciencesMethodological Perspectives and Applications in Healthcare Session Chair: Negar Sadeghi (Northeastern University) Mapping Applications of Computer Simulation in Orthopedic Services: A Topic Modeling Approach Alison L. Harper, Thomas Monks, Navonil Mustafee, and Jonathan T. Evans (University of Exeter) and Al-Amin Kassam (Royal Devon University Healthcare) Program Track: Healthcare and Life Sciences Program Tags: Data Analytics, Open Source Abstract AbstractOrthopedic health services are characterized by high patient volumes, long elective waits, unpredictable emergency demand, and close coupling with other hospital processes. These present significant challenges for meeting operational targets and maintaining quality of care. In healthcare, simulation has been widely used for addressing similar challenges. This systematic scoping review identifies and analyzes academic papers using simulation to address operational-level challenges for orthopedic service delivery. We analyzed 37 studies over two decades, combining a structured analysis with topic modelling to categorize and map applications. Despite widespread recognition of its potential, simulation remains underutilized in orthopedics, with fragmented application and limited real-world implementation. 
Recent trends indicate a shift toward system-wide approaches that better align with operational realities and stakeholder needs. Future research should aim to bridge methodological innovation with collaboration and practical application, such as hybrid and real-time simulation approaches focusing on stakeholder needs, and integrating relevant operational performance metrics.

Stopping Rules for Sampling in Precision Medicine
Mingrui Ding (City University of Hong Kong, Beihang University); Siyang Gao (City University of Hong Kong); and Qiuhong Zhao (Beihang University) Program Track: Healthcare and Life Sciences Program Tag: Ranking and Selection
Abstract: Precision medicine (PM) is an approach that aims to tailor treatments based on patient profiles (patients' biometric characteristics). In PM practice, treatment performance is typically evaluated through simulation models or clinical trials. Although these two methods have differences in their sampling subjects and requirements, both are based on a sequential sampling process and require determining a stopping time for sampling to ensure that, with a prespecified confidence level, the best treatment is correctly identified for each patient profile. In this research, we propose unified stopping rules applicable to both simulation and clinical trial-based PM sampling processes. Specifically, we adapt the generalized likelihood ratio (GLR) test to determine when samples collected are sufficient and calibrate it using mixture martingales with a peeling method. Our stopping rules are theoretically grounded and can be integrated with different types of sampling strategies. Numerical experiments on synthetic problems and a case study demonstrate their effectiveness.
Simulation-Based Optimization for CAR T-Cell Therapy Logistics
Negar Sadeghi and Mohammad Dehghanimohammadabadi (Northeastern University) Program Track: Healthcare and Life Sciences
Abstract: Despite positive clinical outcomes of Chimeric Antigen Receptor T-cell therapy, its time-sensitivity causes substantial logistical challenges. This paper introduces a simulation-optimization (SO) framework to address both transportation mode selection and patient scheduling in CAR T-cell therapy supply chains. This framework combines a discrete-event simulation with a metaheuristic optimization to handle uncertainties in processing times and patient conditions. Experiments across different time-window constraints demonstrate that the developed SO model consistently outperforms traditional scheduling heuristics (FIFO, SPT, EDD) in total cost while maintaining timely delivery. This model provides a superior balance between transportation efficiency and delay minimization compared to rule-based methods. Results highlight the potential of simulation-based optimization to enhance personalized medicine delivery by improving cost-effectiveness without compromising treatment timeliness.

Open Source, Healthcare and Life Sciences
Simulation for Epidemic Control and Policy
Session Chair: Amir Abdollahi (Northeastern University)

Targeted Household Quarantining: Enhancing the Efficiency of Epidemic Response
Johannes Ponge (University of Münster), Julian Patzner (Martin Luther University Halle-Wittenberg), and Bernd Hellingrath and André Karch (University of Münster) Program Track: Healthcare and Life Sciences Program Tag: Open Source
Abstract: Non-pharmaceutical interventions (NPIs) are the immediate public health reaction to emerging epidemics. While they generally help slow down infection dynamics, they can be associated with relevant socioeconomic costs, like lost school or work days caused by preemptive household quarantines.
However, research suggests that not all households contribute equally to the overall infection dynamics. In this study, we introduce the novel “Infection Contribution” metric that allows us to trace the involvement of particular household types over entire infection chains. Building upon the German Epidemic Microsimulation System, we quantify the impact of various household types, considering their size and composition in a COVID-19-like scenario. Additionally, we show how targeting interventions based on household characteristics produces efficient strategies, outperforming non-selective strategies in almost all scenarios. Our approach can be transferred to other NPIs, such as school closure, testing, or contact tracing, and even inform the prioritization of vaccinations.

Evaluating Epidemic Scenarios with Agent-based Simulation: A Case Study from UK Public Health Workshop
Maziar Ghorbani and Anastasia Anagnostou (Brunel University London); Arindam Saha (University College London); and Tasin Islam, Nura Tajjani Abubakar, Kate Mintram, Simon J. E. Taylor, and Derek Groen (Brunel University London) Program Track: Healthcare and Life Sciences
Abstract: The growing complexity of public health emergencies requires modeling tools that are both scientifically robust and operationally scalable. As part of the EU Horizon 2020 STAMINA project, we deployed the Flu and Coronavirus Simulator (FACS), a geospatial agent-based model designed to simulate the spread of infectious diseases at local and regional levels. This paper presents a case study from a UK Public Health Workshop, where FACS supported the evaluation of epidemic response scenarios. We describe how FACS integrates demographic, spatial, and epidemiological data, and outline key enhancements, such as location-based parallelization and FabSim3-enabled automation, which enable large-scale simulation.
We detail the scenario designs and outcomes, highlighting the intersection of simulation projections and intervention planning. Finally, we reflect on communicating results to stakeholders and bridging the gap between modeling and policy. This work demonstrates how geospatially grounded, scalable agent-based simulations can provide meaningful insights into regional intervention planning within operational timeframes.

A Two-Stage Simulation Framework for Evaluating AI Policy Recommendations: A Case Study of COVID-19
Amir Abdollahi, Reyhaneh Mohammadi, Jacqueline Griffin, and Casper Harteveld (Northeastern University) Program Track: Healthcare and Life Sciences
Abstract: As AI integration in critical domains grows, evaluating its effectiveness in complex policy environments
remains challenging. We introduce a two-stage simulation framework for assessing AI policy recommendations in the COVID-19 pandemic. First, we train a deep reinforcement learning (DRL) agent using data from 186 countries to model optimal intervention timing and intensity. Results suggest the DRL agent outperforms average government outcomes within our simplified model under specific assumptions, reducing infections and fatalities and improving recovery rates. Second, we employ SEIRD (Susceptible-Exposed-Infected-Recovered-Dead) modeling to create a dynamic simulation environment, testing
the agent across diverse scenarios beyond historical data. Unlike prior work lacking systematic evaluation,
our framework provides a controlled testbed for high-stakes policy decisions before implementation. This
presents a responsible approach to AI evaluation where real-world experimentation raises ethical concerns.
It highlights the role of simulations in bridging development-deployment gaps while identifying financial
constraints and human-AI interaction as future research priorities.
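The SEIRD compartmental structure named in the abstract above can be sketched as a simple forward-Euler integration. This is an illustrative sketch only; the rate parameters and initial populations below are invented placeholders, not values from the paper, which couples the compartments to a learned DRL policy rather than fixed rates.

```python
# Minimal SEIRD (Susceptible-Exposed-Infected-Recovered-Dead) sketch.
# All parameter values are illustrative placeholders, not from the paper.

def seird_step(state, beta, sigma, gamma, mu, dt):
    """Advance the five compartments by one forward-Euler step of size dt."""
    s, e, i, r, d = state
    n = s + e + i + r              # living population drives the force of infection
    new_exposed = beta * s * i / n
    new_infected = sigma * e       # exposed become infectious
    new_recovered = gamma * i
    new_deaths = mu * i
    return (
        s - dt * new_exposed,
        e + dt * (new_exposed - new_infected),
        i + dt * (new_infected - new_recovered - new_deaths),
        r + dt * new_recovered,
        d + dt * new_deaths,
    )

def simulate(days, state=(990.0, 0.0, 10.0, 0.0, 0.0),
             beta=0.3, sigma=0.2, gamma=0.1, mu=0.01, dt=1.0):
    """Run the sketch for the given horizon and return the final compartments."""
    for _ in range(int(days / dt)):
        state = seird_step(state, beta, sigma, gamma, mu, dt)
    return state
```

A policy intervention would enter such a model by modulating beta (contact reduction) over time, which is the kind of lever the DRL agent described above would control.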
Hybrid Modeling and Simulation
Track Coordinator - Hybrid Modeling and Simulation: Masoud Fakhimi (University of Surrey), Navonil Mustafee (University of Exeter, The Business School)

AnyLogic, Complex Systems, DOE, Siemens Tecnomatix Plant Simulation, System Dynamics, Hybrid Modeling and Simulation
Hybrid M&S: Manufacturing and Digital Twins
Session Chair: Martino Luis (University of Exeter)

Hierarchical Hybrid Automata: A Theoretical Basis for the Digital Twin Approach
Mamadou K. Traoré (University of Bordeaux) Program Track: Hybrid Modeling and Simulation
Abstract: The Digital Twin (DT) concept has garnered significant interest in recent years, with its potential to improve system efficiency through data synchronization and model updates. Unlike traditional modeling techniques, DTs dynamically incorporate operational data to simulate "what-if" scenarios. However, the growing number of definitions and applications of DTs across industries highlights the need for a unified theoretical framework. This paper proposes a system-theoretic approach to formalize the concept of DT, aiming to contribute to a comprehensive DT theory. The paper introduces the DMSµ framework and explores the potential of hierarchical hybrid automata to offer within that framework a formalism that supports symbolic manipulation. It provides a foundation for advancing the formalization and application of DT technology across diverse fields.

Hybrid Simulation-based Algorithm Tuning for Production Speed Management System as a Stand-alone Online Digital Twin
Ahmad Attar, Martino Luis, and Tzu-Chun Lin (University of Exeter); Shuya Zhong (University of Bath); and Voicu Ion Sucala and Abdulaziz Alageel (University of Exeter) Program Track: Hybrid Modeling and Simulation Program Tags: DOE, Siemens Tecnomatix Plant Simulation
Abstract: One of the primary in-built components of smart, continuous manufacturing lines is the production speed management system (PSMS).
In addition to being overly cautious, the decisions made in these systems tend to center on local adjustments to the manufacturing process, a major drawback that prevents them from acting as proper digital twins. This study delves into hybridizing continuous and discrete event simulation, DOE, and V-graph methods to redefine the PSMS’s internal decision algorithms and procedures, giving it an aerial perspective of the line and turning it into a stand-alone online digital twin that makes decisions at the system level. The proposed approach is applied to a practical case from the food and beverage industry to validate its effectiveness. Numerical results demonstrated intelligent, dynamic balancing of the production line, a substantial increase in productivity, and up to 37.7% better resilience against new failure and repair patterns.

A Hybrid Simulation-based Approach for Adaptive Production and Demand Management in Competitive Markets
S. M. Atikur Rahman, Md Fashiar Rahman, and Tzu-Liang Bill Tseng (The University of Texas at El Paso) Program Track: Hybrid Modeling and Simulation Program Tags: AnyLogic, Complex Systems, System Dynamics
Abstract: Managing production, inventory, and demand forecasting in a competitive market is challenging due to consumer behavior and market dynamics. Inefficient forecasting can lead to inadequate inventory, an interrupted production schedule, and, eventually, less profit. This study presents a simulation-based decision support framework integrating discrete event simulation (DES) and system dynamics (SD). DES models production and inventory management to ensure optimized resource utilization, while SD is employed to incorporate market dynamics. This model jointly determines demand through purchase decisions from potential users and replacement demand from existing adopters. Further refinements prevent sales declines and sustain long-term market stability.
This hybrid simulation approach provides insights into demand evolution and inventory optimization, aiding strategic decision-making. Finally, we propose and integrate a dynamic marketing strategy algorithm with the simulation model, which results in around 38% more demand growth than the existing demand curve. The proposed approach was validated through rigorous experimentation and optimization analysis.

AnyLogic, Complex Systems, System Dynamics, Hybrid Modeling and Simulation
Hybrid M&S: Transport Systems
Session Chair: Traecy Elezi (Brunel University London)

Incorporating Elevation in Traffic-Vehicle Co-Simulation: Issues, Impacts, and Solutions
Guanhao Xu, Anye Zhou, Abhilasha Saroj, Chieh (Ross) Wang, Vivek Sujan, Michael O. Rodgers, and Jianfei Chen (Oak Ridge National Laboratory); Oriana Calderón (University of Tennessee, Knoxville); and Zejiang Wang (University of Texas at Dallas) Program Track: Hybrid Modeling and Simulation
Abstract: Traffic-vehicle co-simulation couples microscopic traffic simulation with full-body vehicle dynamics to assess system-level impacts on mobility, energy, and safety with greater realism. Incorporating elevation is critical for accurately modeling vehicle behavior and energy use, especially for gradient-sensitive vehicles such as electric and heavy-duty trucks. However, raw elevation data often contain noise, discontinuities, and inconsistencies. While such issues may be negligible in traditional traffic simulations, they significantly affect traffic-vehicle co-simulations, where vehicle dynamics are sensitive to road grade variations. This paper investigates the impact of unprocessed elevation data on vehicle behavior and energy consumption using a 42-mile simulation along Interstate 81. To mitigate the effects of these elevation data issues, we propose an elevation processing workflow to improve the realism and stability of traffic-vehicle co-simulation.
Results show that the method effectively removes noise and abrupt elevation transitions while preserving roadway geometry.

A Conceptual Hybrid Simulation Approach for Advancing Safety in Connected Automated Vehicles
Traecy Elezi, Anastasia Anagnostou, Fotios Spyridonis, George Ghinea, and Simon J. E. Taylor (Brunel University London) Program Track: Hybrid Modeling and Simulation
Abstract: Ensuring traffic safety remains a major challenge due to the complexity of traffic environments and the early stage of autonomous vehicle (AV) technology, despite their potential to significantly reduce accidents and enhance road safety. The Artificial Potential Field (APF) approach offers a promising solution by simulating how vehicles adjust their motion, speed, and interactions with surrounding vehicles to maintain safety. This paper aims to introduce a conceptual hybrid simulation using the APF implemented within a multi-agent framework. The objective is to evaluate the suitability of the APF model for real-time safety applications across extended time periods and diverse traffic scenarios. This evaluation is conducted through a hybrid simulation approach to identify advantages and limitations compared to existing risk assessment methodologies.

Integrating Decision Field Theory Within System Dynamics Framework for Modeling the Adoption Process of Ride Sourcing Services (Best Contributed Theoretical Paper - Finalist)
Seunghan Lee (Mississippi State University) and Jee Eun Kang (University at Buffalo, SUNY) Program Track: Hybrid Modeling and Simulation Program Tags: AnyLogic, Complex Systems, System Dynamics
Abstract: The rise of ride-sourcing services has changed the transportation industry, reshaping urban mobility services. This paper presents an integrated framework of the adoption of ride-sourcing services and its impact on transportation markets using a combined approach of System Dynamics (SD) and Extended Decision Field Theory (E-DFT).
Drawing on data from ride-sourcing platforms such as Uber and Lyft, the study investigates the temporal dynamics and trends of ride-sourcing demand. SD modeling is employed to capture the complex interactions and feedback loops within the ride-sourcing ecosystem at the system level. The integration of System Dynamics and extended DFT allows for a more comprehensive and holistic modeling of the ride-sourcing market. It enables exploration of various scenarios and policy interventions, providing insights into the long-term behavior of the market and facilitating evidence-based decision-making by policymakers and industry stakeholders while accommodating individual users' decisions based on changing preferences and environments.

Hybrid Modeling and Simulation
Hybrid M&S: Healthcare Applications
Session Chair: Joe Viana (Norwegian University of Science and Technology, St. Olav’s Hospital)

Hybrid Integrated Care Modeling Framework to Improve Patient Outcomes and System Performance
Joe Viana and Anders N. Gullhav (Norwegian University of Science and Technology, St. Olav’s Hospital); Heidi C. Dreyer, Alexander L. Hagen, Marte D-Q Holmemo, Aud U. Obstfelder, and Hanne M. Rostad (Norwegian University of Science and Technology); and Øystein Døhl (Trondheim Municipality, Norwegian University of Science and Technology) Program Track: Hybrid Modeling and Simulation
Abstract: The healthcare sector faces rising demands from aging populations, complex patient needs, and workforce shortages, requiring innovative solutions for resource and service coordination. In response, Trondheim municipality and St. Olavs hospital are developing integrated planning tools within the Research Council of Norway funded HARMONI project. This transdisciplinary initiative aims to bridge the widening gap between service demand and capacity through planning tools that offer an overview of patient flows and resource capacities.
The tools will support comprehensive planning, process adjustments, and capacity dimensioning, fostering collaboration to prevent patient queues and unnecessary transfers. We present three ongoing hybrid model case studies focusing on patient flow between primary and specialist health services. These cases provide opportunities to develop hybrid simulation verification and validation techniques.

Decision Tree Framework for Selecting Evidence-based Colorectal Cancer Screening Interventions Using Metamodels
Ashley Stanfield and Maria Mayorga (North Carolina State University), Meghan O'Leary (University of North Carolina), and Kristen Hassmiller Lich (University of North Carolina at Chapel Hill) Program Track: Hybrid Modeling and Simulation
Abstract: Colorectal cancer (CRC) is the third leading cause of cancer-related death in the U.S., despite being largely preventable through screening. Interventions such as mailing fecal immunochemical tests (FIT) or sending patient reminders have shown varying success in increasing screening rates. Simulation modeling has played a key role in estimating the impact of these interventions on long-term health outcomes by representing the natural history of CRC. In this work, using a simulation model of CRC, we developed a metamodeling-based decision tree to help clinics and health systems select CRC screening interventions that best match their population. Our approach uses estimates of intervention effectiveness based on pre-intervention screening levels, eliminating the need for users to assume how an intervention will impact outcomes. By tailoring recommendations to population characteristics and baseline screening rates, the decision tree supports data-driven decisions to improve CRC screening and, ultimately, population health.
Hybrid Agent-based and System Dynamics Modeling of Antibacterial Resistance in Community
Avi Roy Chowdhury (Indian Institute of Technology Bombay), Jayendran Venkateswaran (IIT Bombay), and Om Damani (Indian Institute of Technology Bombay) Program Track: Hybrid Modeling and Simulation
Abstract: Antibiotic resistance (ABR) is a major global health threat, contributing to increased mortality and economic losses. The emergence and spread of ABR is driven by healthcare practices, environmental contamination, and human-to-human transmission. In low- and middle-income countries (LMICs), limited healthcare infrastructure and environmental factors, such as contaminated water and poor sanitation, exacerbate the situation. In these regions, inappropriate antibiotic use and insufficient infection control measures further promote resistance. This paper presents a hybrid model that combines System Dynamics (SD) and Agent-Based Modeling (ABM) to explore complex interactions between healthcare systems, environmental factors, and human behavior in community settings. The SD approach models aggregated within-host bacterial dynamics and external environmental factors, while the ABM captures individual behaviors, community interactions, and interactions with the healthcare system. By integrating these methods, this study offers a more comprehensive framework for understanding the emergence of ABR in LMICs. Preliminary results and future directions are discussed.
Hybrid Modeling and Simulation
Hybrid M&S: AI and Interoperability
Session Chair: Niclas Feldkamp (Ilmenau University of Technology)

On the Use of Generative AI in Simulation Studies: A Review of Techniques, Applications and Opportunities
Niclas Feldkamp (Ilmenau University of Technology) Program Track: Hybrid Modeling and Simulation
Abstract: Although Large Language Models get a lot of attention, Generative Artificial Intelligence encompasses a variety of methods, such as Generative Adversarial Networks, Variational Autoencoders, and Diffusion Models, which all work very differently but are all capable of generating synthetic data. These methods have considerable potential to make simulation studies more efficient, especially through the creation of artificial data sets, automatic model parameterization, and assisted result analysis. The aim of this study is to systematically classify generative methods and their applicability in the context of simulation studies. Based on a comprehensive literature review, applications, trends, and challenges of generative methods that are used in combination with simulation are analyzed and structured. This is then summarized in a conceptual workflow that shows how and in which phase generative methods can be used advantageously in simulation studies.

Ontology Enabled Hybrid Modeling and Simulation
John Beverley (University at Buffalo) and Andreas Tolk (The MITRE Corporation) Program Track: Hybrid Modeling and Simulation
Abstract: We explore the role of ontologies in enhancing hybrid modeling and simulation through improved semantic rigor, model reusability, and interoperability across systems, disciplines, and tools. By distinguishing between methodological and referential ontologies, we demonstrate how these complementary approaches address interoperability challenges along three axes: Human–Human, Human–Machine, and Machine–Machine.
Techniques such as competency questions, ontology design patterns, and layered strategies are highlighted for promoting shared understanding and formal precision. Integrating ontologies with Semantic Web Technologies, we showcase their dual role as descriptive domain representations and prescriptive guides for simulation construction. Four application cases – sea-level rise analysis, Industry 4.0 modeling, artificial societies for policy support, and cyber threat evaluation – illustrate the practical benefits of ontology-driven hybrid simulation workflows. We conclude by discussing challenges and opportunities in ontology-based hybrid M&S, including tool integration, semantic alignment, and support for explainable AI.

From Over-reliance to Smart Integration: Using Large Language Models as Translators between Specialized Modeling and Simulation Tools
Philippe J. Giabbanelli (Old Dominion University), John Beverley (University at Buffalo), Istvan David (McMaster University), and Andreas Tolk (The MITRE Corporation) Program Track: Hybrid Modeling and Simulation
Abstract: Large Language Models (LLMs) offer transformative potential for Modeling & Simulation (M&S) through natural language interfaces that simplify workflows. However, over-reliance risks compromising quality due to ambiguities, logical shortcuts, and hallucinations. This paper advocates integrating LLMs as middleware or translators between specialized tools to mitigate complexity in M&S tasks. Acting as translators, LLMs can enhance interoperability across multi-formalism, multi-semantics, and multi-paradigm systems. We address two key challenges: identifying appropriate languages and tools for modeling and simulation tasks, and developing efficient software architectures that integrate LLMs without performance bottlenecks.
To this end, the paper explores LLM-mediated workflows, emphasizes structured tool integration, and recommends Low-Rank Adaptation-based architectures for efficient task-specific adaptations. This approach ensures LLMs complement rather than replace specialized tools, fostering high-quality, reliable M&S processes.

AnyLogic, Complex Systems, Conceptual Modeling, Hybrid Modeling and Simulation
Hybrid M&S: Environment and Society
Session Chair: Okechukwu Okorie (The University of Manchester, University of Exeter)

Conceptual Hybrid Modelling Framework Facilitating Scope 3 Carbon Emissions Evaluation for High Value Manufacturing
Okechukwu Okorie, Victoria Omeire, Paul Mativenga, and Maria Sharmina (The University of Manchester) and Peter Hopkinson (The University of Exeter) Program Track: Hybrid Modeling and Simulation Program Tags: AnyLogic, Complex Systems, Conceptual Modeling
Abstract: Existing manufacturing research on greenhouse gas emissions often focuses on Scope 1 and Scope 2 emissions and underestimates Scope 3 emissions, which are indirect emissions from a firm’s value chains, city and region consumption. Traditional methodologies for evaluating carbon emissions are limited for Scope 3 emissions, due to the complexity of manufacturing supply chains and lack of quality data, leading to incomplete carbon accounting and potential double-counting. This challenge is pronounced for high value manufacturing, an emergent manufacturing perspective, due to the complexity of its supply chain network. This study develops a comprehensive hybrid modeling framework for evaluating Scope 3 emissions at product level, useful for manufacturers and modelers.
Modeling CO2 Spread in Indoor Environments
Marina Murillo and Gabriel Wainer (Carleton University) Program Track: Hybrid Modeling and Simulation
Abstract: We introduce a hybrid modeling approach for simulating carbon dioxide (CO2) dispersion in indoor environments by integrating Cellular Automata (CA), Discrete Event System Specification (DEVS), and agent-based modeling. The proposed framework enhances traditional models by incorporating dynamic CO2 generators, random-walk algorithms, and CO2 sinks. We show how the method can be used to examine the effects that room layouts, occupant movement, ventilation settings, and the placement of CO2 sinks and sources have on indoor concentration patterns. The approach presented here enables the exploration of various configuration parameters and provides a flexible and scalable tool for understanding CO2 diffusion.

Hybrid Simulation of Socio-economic Systems: Deferred Penalty for Maternity in Future Retirement
Bożena Mielczarek and Maria Hajłasz (Wroclaw University of Science and Technology) Program Track: Hybrid Modeling and Simulation
Abstract: The paper deals with the application of hybrid simulation in the field of socio-economic systems. We present a model for assessing the extent to which interruptions in work during a woman's professional career affect the amount of her future pension. Breaks in work due to childbirth and child-rearing, as well as shorter breaks due to emergency care of sick children, significantly affect the amount of the first pension collected in defined contribution pension systems. The model is a hybrid of a demographic model developed according to the system dynamics approach and a pension model built with the discrete event simulation paradigm. The experiments examine the size of the maternity penalty for different career scenarios and long-term demographic changes.
The primary research objective was to examine the extent to which women's individual life-course choices affect their pension outcomes, in interaction with systemic features of pension schemes and projected demographic changes.
Track Coordinator - Introductory Tutorials: Chang-Han Rhee (Northwestern University), Antuela Tako (Nottingham Trent University)

Open Source, Introductory Tutorials
A Tutorial on Resource Modeling Using the Kotlin Simulation Library
Session Chair: Sujee Lee (Sungkyunkwan University)

A Tutorial on Resource Modeling Using the Kotlin Simulation Library
Manuel D. Rossetti (University of Arkansas) Program Track: Introductory Tutorials Program Tag: Open Source
Abstract: The Kotlin Simulation Library (KSL) is an open-source library written in the Kotlin programming language that facilitates Monte Carlo and discrete-event simulation modeling. The library provides an API framework for developing, executing, and analyzing models using both the event view and the process view modeling perspectives. This paper provides a tutorial on modeling with resources within simulation models. The KSL will be utilized to illustrate important concepts that every simulation modeler should understand within the context of modeling resources within a simulation model. A general discussion of resource modeling concepts is presented. Then, examples are used to illustrate how to put the concepts into practice. While the concepts will be presented within the context of the KSL, the ideas should be important to users of other simulation languages. This tutorial provides both an overview of resource modeling constructs within the KSL and presents tutorial examples.

Introductory Tutorials
Tutorial: Concepts of Conceptual Modeling
Session Chair: Christoph Kogler (Institute of Production and Logistics; University of Natural Resources and Life Sciences, Vienna)

Tutorial: Concepts of Conceptual Modeling
Stewart Robinson (Newcastle University) Program Track: Introductory Tutorials
Abstract: Conceptual modeling entails the abstraction of a simulation model from the system of interest to create a simplified representation of the real world.
This activity is vital to successful simulation modeling, but it is not well understood. In this tutorial we aim to develop a better understanding of conceptual modeling by exploring this activity from various perspectives. Our initial focus is on defining both a conceptual model and the activity of conceptual modeling. The activity is further explored from a practice-based perspective with reference to an example that is based on actual events. Conceptual modeling is then discussed from three further perspectives: understanding the relationship between a simulation model’s accuracy and its complexity; the role of assumptions and simplifications in modeling; and frameworks for guiding the activity of conceptual modeling. This exploration of the concepts of conceptual modeling provides the underlying knowledge required for improving our conceptual models for simulation.

Introductory Tutorials
A Tutorial on Generative AI and Simulation Modeling Integration
Session Chair: Konstantinos Ziliaskopoulos (Auburn University)

A Tutorial on Generative AI and Simulation Modeling Integration
Mohammad Dehghanimohammadabadi, Sahil Belsare, and Negar Sadeghi (Northeastern University) Program Track: Introductory Tutorials
Abstract: This tutorial paper explores the applicability of Generative AI (GenAI), particularly Large Language Models (LLMs), within the context of simulation modeling. The discussion is organized around three key perspectives: Generation, where GenAI is used to create simulation models and input data; Execution, where it supports real-time decision-making and adaptive logic during simulation runs; and Analysis, where GenAI assists in running experiments, interpreting results, and generating insightful reports. In addition to these core perspectives, the paper also covers practical implementation considerations such as prompt engineering, fine-tuning, and Retrieval-Augmented Generation (RAG).
The tutorial offers both conceptual guidance and hands-on examples to support researchers and practitioners seeking to integrate GenAI into simulation environments.

Introductory Tutorials
A Tutorial on Data-Driven Petri Net Model Extraction and Simulation for Digital Twins in Smart Manufacturing
Session Chair: Hridyanshu Aatreya (Brunel University London)

A Tutorial on Data-Driven Petri Net Model Extraction and Simulation for Digital Twins in Smart Manufacturing
Atieh Khodadadi (Karlsruhe Institute of Technology); Ashkan Zare (University of Southern Denmark); Michelle Jungmann and Manuel Götz (Karlsruhe Institute of Technology); and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, University of Southern Denmark) Program Track: Introductory Tutorials
Abstract: The adoption of data-driven Digital Twins in smart manufacturing systems necessitates robust, data-driven modeling techniques. Stochastic Petri Nets (SPNs) offer a formal framework for capturing concurrency and synchronization in discrete event systems, making them well suited for modeling smart manufacturing systems. This tutorial provides a hands-on introduction to extracting SPN models from system event logs. Using our Python-based SPN library (PySPN), we guide participants through generating ground-truth models in Petri Net Markup Language (PNML), simulating SPNs to generate event logs, and applying Process Mining techniques for automated SPN extraction. Two case studies illustrate the end-to-end workflow, including SPN model validation for Digital Twins. By the end of the tutorial, participants will gain practical skills in data-driven SPN modeling for Digital Twin applications in manufacturing.

Introductory Tutorials
Secrets of Successful Simulation Studies
Session Chair: Manuel D. Rossetti (University of Arkansas)

Secrets of Successful Simulation Studies
Averill Law (Averill M.
Law & Associates) Program Track: Introductory Tutorials Abstract AbstractIn this tutorial we give a definitive and comprehensive 10-step approach for conducting a successful simulation study. Topics to be discussed include problem formulation, collection and analysis of data, developing a valid and credible model, modeling sources of system randomness, design and analysis of simulation experiments, model documentation, and project management. pdf Introductory TutorialsFrom Digital Twins to Twinning Systems Session Chair: Michelle Jungmann (Karlsruhe Institute of Technology) From Digital Twins to Twinning Systems Giovanni Lugaresi (KU Leuven) and Hans Vangheluwe (University of Antwerp, Flanders Make) Program Track: Introductory Tutorials Abstract AbstractDigital twins are rapidly emerging as a disruptive innovation in industry, offering a dynamic integration of physical systems with their virtual counterparts through data-driven and real-time synchronization. In this work, we explore the foundational principles, architectures, and life cycle of digital twins, with a particular emphasis on their use in simulation-based decision-making. We dissect the core components of a digital twin and examine how these elements interact across a system’s life cycle. The notion of a twinning system is introduced as a unifying framework for sensing, simulation and digital models/shadows/twins, with applications in diverse fields. We outline the most significant choices that stakeholders must make. A case study based on a lab-scale manufacturing system illustrates how DTs are constructed, validated, and used for scenario analysis and autonomous decision-making. We also discuss the challenges associated with synchronization, model validity, and the development of internal services for operational use. 
pdf Neural Networks, Python, Supply Chain, Introductory TutorialsBridging the Gap: A Practical Guide to Implementing Deep Reinforcement Learning Simulation in Operations Research with Gymnasium Session Chair: Traecy Elezi (Brunel University London) Bridging the Gap: A Practical Guide to Implementing Deep Reinforcement Learning Simulation in Operations Research with Gymnasium Konstantinos Ziliaskopoulos, Alexander Vinel, and Alice E. Smith (Auburn University) Program Track: Introductory Tutorials Program Tags: Neural Networks, Python, Supply Chain Abstract AbstractDeep Reinforcement Learning (DRL) has shown considerable promise in addressing complex sequential decision-making tasks across various fields, yet its integration within Operations Research (OR) remains limited despite clear methodological compatibility. This paper serves as a practical tutorial aimed at bridging this gap, specifically guiding simulation practitioners and researchers through the process of developing DRL environments using Python and the Gymnasium library. We outline the alignment between traditional simulation model components, such as state and action spaces, objective functions, and constraints, and their DRL counterparts. Using an inventory control scenario as an illustrative example, which is also available online through our GitHub repository, we detail the steps involved in designing, implementing, and integrating custom DRL environments with contemporary DRL algorithms. pdf
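The Gymnasium-based tutorial above maps simulation components (state space, action space, objective) onto a DRL environment using an inventory control example. As a rough illustration of that mapping, the sketch below implements a toy inventory environment following the Gymnasium `reset`/`step` interface; all parameters (capacity, costs, demand distribution) are invented placeholders, not the tutorial's actual environment, and a real implementation would subclass `gymnasium.Env` and declare `observation_space`/`action_space`:

```python
import random

class InventoryEnv:
    """Toy inventory-control environment mirroring the Gymnasium interface.
    All parameters are illustrative, not taken from the tutorial."""

    def __init__(self, capacity=20, max_order=10, horizon=52, seed=None):
        self.capacity, self.max_order, self.horizon = capacity, max_order, horizon
        self.rng = random.Random(seed)

    def reset(self, *, seed=None, options=None):
        if seed is not None:
            self.rng.seed(seed)
        self.stock, self.t = self.capacity // 2, 0
        return self.stock, {}  # (observation, info), as in Gymnasium

    def step(self, action):
        # State transition: the ordered quantity arrives, then stochastic demand is served.
        self.stock = min(self.stock + action, self.capacity)
        demand = self.rng.randint(0, 9)
        sold = min(demand, self.stock)
        self.stock -= sold
        # Reward: revenue minus holding cost and lost-sales penalty.
        reward = 5.0 * sold - 1.0 * self.stock - 2.0 * (demand - sold)
        self.t += 1
        terminated = self.t >= self.horizon  # episode ends after the planning horizon
        return self.stock, reward, terminated, False, {}

# Roll out a fixed order-up policy for one episode.
env = InventoryEnv(seed=42)
obs, info = env.reset()
total, done = 0.0, False
while not done:
    obs, r, done, truncated, info = env.step(4)
    total += r
```

Once wrapped as a `gymnasium.Env`, such an environment can be trained against off-the-shelf DRL algorithms without further glue code.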
Logistics, Supply Chain Management, Transportation Track Coordinator - Logistics, Supply Chains, Transportation: Montasir Abbas (Virginia Tech), Majsa Ammouriova (German Jordanian University, Universitat Oberta de Catalunya), Dave Goldsman (Georgia Institute of Technology), Markus Rabe (MB / ITPL, TU Dortmund University) Distributed, DOE, JMP, Logistics, Supply Chain Management, TransportationTraffic Simulation Session Chair: Irene Izco (Public University of Navarre, Institute of Smart Cities) Unfolding Diffusive and Refinement Phases Of Heterogeneous Performance-Aware Re-Partitioning for Distributed Traffic Simulation Anibal Siguenza-Torres (Technical University of Munich); Alexander Wieder, Stefano Bortoli, and Margherita Grossi (Huawei Munich Research Center); Wentong Cai (Nanyang Technological University); and Alois Knoll (Technical University of Munich) Program Track: Logistics, Supply Chain Management, Transportation Program Tag: Distributed Abstract AbstractThis work presents substantial improvements to Enhance, a recent approach for graph
partitioning in large-scale distributed microscopic traffic simulations, particularly in challenging
load-balancing scenarios within heterogeneous computing environments. With a thorough analysis of the diffusive and refinement phases of the Enhance algorithm, we identified orthogonal opportunities for optimizations that markedly improved the quality of the generated partitionings. We validated these improvements using synthetic scenarios, achieving up to a 46.5% reduction in estimated runtime compared to the original algorithm and providing sound reasoning and intuitions to explain the nature and magnitude of the improvements. Finally, we show experimentally that the performance gains observed in the synthetic scenario partially translate into performance gains in the real system. pdf Calibrating Driver Aggression Parameters in Microscopic Simulation using Safety-Surrogate Measures David Hong and Montasir Abbas (Virginia Tech) Program Track: Logistics, Supply Chain Management, Transportation Program Tags: DOE, JMP Abstract AbstractThis research aimed to develop a methodology and a framework to calibrate the driving behaviors of microscopic simulation models to reproduce safety conflicts observed in real-world environments. The Intelligent Driver Model (IDM) was selected as the car-following algorithm to be utilized in the External Driver Model (EDM) Application Programming Interface (API) in VISSIM to better represent real-world driving behavior. The calibration method starts with an experiment design in the statistical software JMP Pro 16, which provided 84 simulation runs, each with a distinct combination of the 11 EDM input variables. After these 84 runs, the traffic trajectories were analyzed with the FHWA’s Surrogate Safety Assessment Model (SSAM) to generate crossing, rear-end, and lane change conflict counts. It is concluded that the proposed calibration method can closely match the conflict counts translated from real-world conditions. 
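For context on the abstract above, the standard IDM computes a vehicle's acceleration from its speed, the speed difference to the leader, and the gap, with aggression typically tuned via the desired headway, maximum acceleration, and comfortable deceleration. A minimal sketch of the textbook formula follows; the parameter values are illustrative defaults, not the calibrated EDM inputs from the study:

```python
import math

def idm_acceleration(v, dv, s, v0=33.3, T=1.5, a_max=1.0, b=2.0, s0=2.0, delta=4):
    """Standard IDM acceleration (m/s^2).

    v  : ego speed (m/s)
    dv : speed difference to leader, v_ego - v_leader (m/s)
    s  : bumper-to-bumper gap to the leader (m)
    Driver aggression is usually tuned via T (desired headway),
    a_max (max acceleration), and b (comfortable deceleration).
    """
    # Desired dynamic gap: jam distance + headway term + braking interaction term.
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a_max * b)))
    # Free-flow term minus interaction term.
    return a_max * (1 - (v / v0) ** delta - (s_star / s) ** 2)
```

On an empty road at low speed the model accelerates toward the desired speed `v0`, while closing in fast on a leader produces strong braking, which is what makes IDM parameters usable as proxies for driver aggression.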
pdf Agent-based Modeling and Simulation of Battery Dynamics in Electric Delivery Vehicles under Realistic Urban Scenarios Irene Izco, Anas Al-Rahamneh, Adrian Serrano-Hernandez, and Javier Faulin (Public University of Navarre; GILT Group, Institute of Smart Cities) Program Track: Logistics, Supply Chain Management, Transportation Abstract AbstractElectric delivery vehicles are becoming increasingly common in last-mile logistics. This shift to electric transport introduces new challenges to urban transportation management, as these vehicles are range-constrained under heavy loads and require charging infrastructure on a daily basis. Battery systems constitute the most critical component of these vehicles, and understanding their energy consumption is essential. This paper presents a simulation-based framework to evaluate battery performance under realistic delivery scenarios, focusing on how operational variables such as speed, payload, and travel distance affect power consumption. To this end, an integrated agent-based simulation model is proposed, incorporating an equivalent electric circuit model of the battery and a mechanical model of the vehicle energy requirements. Moreover, ground-based vehicles are considered for experiments. Two battery configurations are used in the simulated scenarios, representing small and medium-sized battery systems. In addition, short and long delivery times are considered to evaluate the impact of battery size on distribution. pdf AnyLogic, Distributed, Java, Simio, System Dynamics, Aviation Modeling and Analysis, Logistics, Supply Chain Management, TransportationAdvances in Aviation Modeling and Simulation Session Chair: Hauke Stolz (Technical University of Hamburg) A Synergistic Approach to Workforce Optimization in Airport Screening using Machine Learning and Discrete-Event Simulation Lauren A. 
Cravy and Eduardo Perez (Texas State University) Program Track: Logistics, Supply Chain Management, Transportation Program Tag: Simio Abstract AbstractThis study explores the integration of machine learning (ML) clustering techniques into a simulation-optimization framework aimed at enhancing the efficiency of airport security checkpoints. Simulation-optimization is particularly suited for addressing problems characterized by evolving data uncertainties, necessitating critical system decisions before the complete data stream is observed. This scenario is prevalent in airport security, where passenger arrival times are unpredictable and resource allocation must be planned in advance. Despite its suitability, simulation-optimization is computationally intensive, limiting its practicality for real-time decision-making. This research hypothesizes that incorporating ML clustering techniques into the simulation-optimization framework can significantly reduce computational time. A comprehensive computational study is conducted to evaluate the performance of various ML clustering techniques, identifying the OPTICS method as the best-performing approach. By incorporating this technique, the framework significantly reduces computational time while maintaining high-quality solutions for resource allocation. pdf Development of a Library of Modular Components to Accelerate Material Flow Simulation in the Aviation Industry Hauke Stolz, Philipp Braun, and Hendrik Rose (Technical University of Hamburg) and Helge Fromm and Sascha Stebner (Airbus Group) Program Track: Logistics, Supply Chain Management, Transportation Program Tags: AnyLogic, Java Abstract AbstractAircraft manufacturing presents significant challenges for logistics departments due to the complexity of processes and technology, as well as the high variety of parts that must be handled. 
To support the development and optimization of these complex logistics processes in the aviation industry, simulation is commonly employed. However, existing simulation models are typically tailored to specific use cases. Reusing or adapting these models for other aircraft-specific applications often requires substantial implementation and
validation efforts. As a result, there is a need for flexible and easily adaptable simulation models. This work aims to address this challenge by developing a modular library for logistics processes in aircraft manufacturing. The outcome of this work highlights the simplifications introduced by the developed library and its application in a real aviation warehouse. pdf Using the Tool Command Language for a Flight Simulation Flight Dynamics Model Frank Morlang (Private Person) and Steffen Strassburger (Ilmenau University of Technology) Program Track: Aviation Modeling and Analysis Program Tags: Distributed, System Dynamics Abstract AbstractThis paper introduces a methodology for simulating flight dynamics utilizing the Tool Command Language (Tcl). Tcl, created by John Ousterhout, was conceived as an embeddable scripting language for an experimental Computer Aided Design (CAD) system. As a mature and still-evolving language recognized for its simplicity, versatility, and extensibility, Tcl is a compelling contender for the integration of flight dynamics functionalities. The work presents an extension method utilizing Tcl's adaptability for a novel type of flight simulation programming. Initial test findings demonstrate performance appropriate for the creation of human-in-the-loop real-time flight simulations. The possibility for efficient and precise modeling of future complex distributed simulation elements is discussed, and recommendations regarding subsequent development priorities are drawn. 
pdf AnyLogic, Emergent Behavior, Input Modeling, Supply Chain, Logistics, Supply Chain Management, TransportationSustainability in Logistics Session Chair: Lasse Jurgeleit (Technische Universität Dortmund) An Agent-Based Framework for Sustainable Perishable Food Supply Chains Maram Shqair (Auburn University); Karam Sweis, Haya Dawkassab, and Safwan Altarazi (German Jordanian University); and Konstantinos Mykoniatis (Auburn University) Program Track: Logistics, Supply Chain Management, Transportation Program Tags: AnyLogic, Input Modeling, Supply Chain Abstract AbstractThis study presents an agent-based modeling framework for enhancing the efficiency and sustainability of perishable food supply chains. The framework integrates forward logistics redesign, reverse logistics, and waste valorization into a spatially explicit simulation environment. It is applied to the tomato supply chain in Jordan, restructuring the centralized market configuration into a decentralized closed loop system with collection points, regional hubs, and biogas units. The model simulates transportation flows, agent interactions, and waste return through retailer backhauls. Simulation results show a 31.1 percent reduction in annual transportation distance and cost, and a 35.9 percent decrease in transportation cost per ton. The proposed approach supports cost-effective logistics and a more equitable distribution of transport burden, particularly by shifting a greater share to retailers. Its modular structure, combined with reliance on synthetic data and scenario flexibility, makes it suitable for evaluating strategies in fragmented, resource-constrained supply chains. 
pdf Identification of Spatial Energy Demand Shift Flexibilities of EV Charging on Regional Level Through Agent-Based Simulation Paul Benz and Marco Pruckner (University of Würzburg) Program Track: Logistics, Supply Chain Management, Transportation Program Tags: AnyLogic, Emergent Behavior Abstract AbstractOpen access to electric vehicle charging session data is limited to a small selection provided by operators of mostly public or workplace chargers. This restriction poses a hurdle in research on regional energy demand shift flexibilities enabled by smart charging, since usage characteristics between different charging options remain hidden. In this paper, we present an agent-based simulation model with parameterizable availability and usage preferences of public and private charging infrastructure to gain insights into charging behavior otherwise only visible through proprietary data. Thus, we enable utility operators to estimate spatial charging energy distribution and support the integration of renewable energy by showing the potential of smart charging. In a first application, we point out how increased access and use of private charging facilities can lead to additional energy demand in rural municipalities, which, in turn, leads to a lower grid load in urban centers. pdf Combining Optimization and Automatic Simulation Model Generation for Less-Than-Truckload Terminals Lasse Jurgeleit, Patrick Buhle, Maximilian Mowe, and Uwe Clausen (TU Dortmund University) Program Track: Logistics, Supply Chain Management, Transportation Program Tag: AnyLogic Abstract AbstractRecent advances allow Less-Than-Truckload (LTL) terminals to know the distribution of arriving goods on inbound trucks in advance. Therefore, assigning docks to inbound and outbound relations and trucks to docks is a critical problem for terminal operators. This paper introduces a framework combining automatic model generation and optimization. 
The approach aims to allow testing of suggestions from multiple optimization algorithms. The relevance and feasibility of this approach in finding an appropriate optimization algorithm for a given system are demonstrated through a simplified case study of a variation of the dock assignment problem. This paper demonstrates how such a combination can be constructed and how the methods can effectively complement each other, using the example of LTL terminals. pdf AnyLogic, Data Analytics, Python, Siemens Tecnomatix Plant Simulation, Supply Chain, Logistics, Supply Chain Management, TransportationSimulation to Support Planning Session Chair: Gabriel Thurow (Otto-von-Guericke-University Magdeburg) Sales Planning Using Data Farming in Trading Networks Joachim Hunker (Fraunhofer Institute for Software and Systems Engineering ISST) and Alexander Wuttke, Markus Rabe, Hendrik van der Valk, and Mario di Benedetto (TU Dortmund University) Program Track: Logistics, Supply Chain Management, Transportation Program Tags: AnyLogic, Data Analytics, Python Abstract AbstractVolatile customer demand poses a significant challenge for the logistics networks of trading companies. To mitigate the uncertainty in future customer demand, many products are produced to stock with the goal of meeting customers’ expectations. To adequately manage their product inventory, demand forecasting is a major concern in the companies’ sales planning. Besides using observational data as input to the forecasting methods, a promising approach is simulation-based data generation, called data farming. In this paper, purposeful data generation and large-scale experiments are applied to generate input data for predicting customer demand in sales planning of a trading company. An approach is presented for using data farming in combination with established forecasting methods such as random forests. 
The approach is demonstrated on a real-world use case, highlighting its benefits and providing useful and value-adding insights to motivate further research. pdf Simulation-based Production Planning For An Electronic Manufacturing Service Provider Using Collaborative Planning Gabriel Thurow, Benjamin Rolf, Tobias Reggelin, and Sebastian Lang (Otto-von-Guericke-University Magdeburg) Program Track: Logistics, Supply Chain Management, Transportation Program Tags: Siemens Tecnomatix Plant Simulation, Supply Chain Abstract AbstractThe growing trend of specialization is significantly increasing the importance of Electronic Manufacturing Services (EMS) providers. Typically, EMS companies operate within global supply networks characterized by high complexity and dynamic interactions between multiple stakeholders. As a consequence, EMS providers frequently experience volatile and opaque procurement and production planning processes. This paper investigates the potential of collaborative planning between EMS providers and their customers to address these challenges. Using discrete-event simulation, we compare traditional isolated planning approaches with collaborative planning strategies. Based on empirical data from an EMS company, our findings highlight the benefits of collaborative planning, particularly in improving inventory management and service levels for EMS providers. We conclude by presenting recommendations for practical implementation of collaborative planning in the EMS industry. pdf Logistics, Supply Chain Management, TransportationSimulation and Optimization Session Chair: Christoph Kogler (Institute of Production and Logistics; University of Natural Resources and Life Sciences, Vienna)
Manufacturing and Industry 4.0 Track Coordinator - Manufacturing and Industry 4.0: Alp Akcay (Northeastern University), Christoph Laroque (University of Applied Sciences Zwickau), Guodong Shao (National Institute of Standards and Technology) Manufacturing and Industry 4.0Manufacturing Intralogistics Session Chair: Deogratias Kibira (National Institute of Standards and Technology, University of Maryland) From Coordination to Efficiency: Boosting Smart Robot Use via Hybrid Intralogistics Concepts Bilgenur Erdogan, Quang-Vinh Dang, Mehrdad Mohammadi, and Ivo Adan (Eindhoven University of Technology) Program Track: Manufacturing and Industry 4.0 Abstract AbstractAutomated guided vehicles (AGVs) and autonomous mobile robots (AMRs) are widely used in intralogistics for material delivery and product transport. As manufacturing evolves with Industry 5.0, the integration of autonomous systems, particularly in complex shop floor layouts, plays a crucial role in improving efficiency and reducing human intervention. This study explores various intralogistics concepts for AGVs and AMRs collaborating on an assembly line. We define an online dispatching rule for AGVs and benchmark it against previous implementations on high-mix, low-volume production. All proposed concept-related decisions are analyzed via a simulation model in a real-world case study, with sensitivity analyses varying system parameters. We observe an improvement of up to 51% in the hourly throughput rate and a significant drop in AGV utilization rates of up to 88% compared to benchmark instances. The results also show that pooling AGV/AMR resources achieves a higher throughput rate and consequently reduces investment costs. 
pdf Simulation Study Analyzing the Generalization of a Multi-Agent Reinforcement Learning Approach to Control Gantry Robots Horst Zisgen and Jannik Hinrichs (Darmstadt University of Applied Science) Program Track: Manufacturing and Industry 4.0 Abstract AbstractIndustry 4.0 is driving a significant transition in the field of manufacturing and production planning and control. A cornerstone of Industry 4.0 scenarios is the ability of control systems to adapt autonomously to changes on the shop floor. Reinforcement Learning is considered an approach to achieve this target. Consequently, the control strategies of the agents trained by Reinforcement Learning need to generalize such that the agents can control modified production systems they are not directly trained for. This paper presents an evaluation of the generalization properties of a decentralized multi-agent Reinforcement Learning algorithm for controlling flow shops using complex gantry robot systems for automated material handling. It is shown that the corresponding agents are able to cope with variations of the production and gantry robot system, as long as these variations are within realistic boundaries, and thus are suitable for Industry 4.0 scenarios. pdf Decision Support Systems in Production Logistics: An Analytical Literature Review Katharina Langenbach and Markus Rabe (TU Dortmund University) and Christin Schumacher (AutoSchedule) Program Track: Manufacturing and Industry 4.0 Abstract AbstractDecision Support Systems (DSS) are a crucial component in production logistics, aiding companies in solving complex decision problems with multiple influences. This publication provides a structured review of the literature on the application of DSS in production logistics, focusing on the applied methods for decision support, such as optimization or Artificial Intelligence. The analysis considers scientific publications from 2015 to 2024, including industry use cases. 
The publications are analyzed using a categorization of DSS. The findings highlight trends and limitations of current DSS application cases from the literature. Optimization methods, particularly heuristics and metaheuristics, are the most commonly employed decision support methods, followed by simulation. Despite the increased interest in AI technologies, their role in DSS for production logistics remains secondary. Like simulation methods, AI technologies are highly relevant when combined with optimization methods. The study provides a foundation for future research and practical advancements in decision support for manufacturing environments. pdf Conceptual Modeling, Data Driven, FlexSim, Manufacturing and Industry 4.0Reinforcement Learning for Production Scheduling Session Chair: Christin Schumacher (AutoSchedule) Reinforcement Learning in a Digital Twin for Galvano Hoist Scheduling Marvin Carl May (Massachusetts Institute of Technology), Louis Schäfer (adesso SE), and Jan-Philipp Kaiser (Karlsruhe Institute of Technology) Program Track: Manufacturing and Industry 4.0 Abstract AbstractReinforcement Learning (RL) has evolved as a dominant AI method to move towards optimal control of complex systems. In the domain of manufacturing, production control has emerged as one of the major application areas, where the material flow is governed by an RL agent that is fed with the real-time information flow of the system through a digital twin. A digital twin framework facilitates efficient and effective RL training. The coordination task performed in discrete, flexible manufacturing offers multiple decisions that cannot be found in all cases. Hoist scheduling for galvanic equipment introduces additional constraints as parts cannot be left inside a galvanic bath arbitrarily long. Even short deviations critically affect product quality, which is even more complicated in high-mix, high-volume environments. 
The proposed RL agent learns superior control compared to the state of the art, and simple heuristic rules are derived for everyday application in the absence of digital twins. pdf A Reinforcement Learning-Based Discrete Event Simulation Approach For Streamlining Job-Shop Production Line Under Uncertainty Jia-Min Chen, Bimal Nepal, and Amarnath Banerjee (Texas A&M University) Program Track: Manufacturing and Industry 4.0 Program Tags: Data Driven, FlexSim Abstract AbstractStreamlining the order release strategy for a job-shop production system under uncertainty is a complex problem. The system is likely to have a number of stochastic parameters contributing to the problem complexity. These factors make it challenging to develop optimal job-shop schedules. This paper presents a Reinforcement Learning-based discrete-event simulation approach that streamlines the policy for releasing orders in a job-shop production line under uncertainty. A digital twin (DT) was developed to simulate the job-shop production line, which facilitated the collection of process and equipment data. A reinforcement learning algorithm is connected to the DT environment and trains with the previously collected data. Once the training is complete, its solution is evaluated in the DT using experimental runs. The method is compared with a few popular heuristic-based rules. The experimental results show that the proposed method is effective in streamlining the order release in a job-shop production system with uncertainty. 
pdf Reinforcement Learning in Production Planning and Control: A Review on State, Action and Reward Design in Order Release and Production Scheduling Patrick Farwick (University of Applied Science Bielefeld) and Christian Schwede (University of Applied Science Bielefeld, Fraunhofer Institute of Software and Systems Engineering) Program Track: Manufacturing and Industry 4.0 Program Tag: Conceptual Modeling Abstract AbstractProduction Planning and Control (PPC) faces increasing complexity due to volatile demand, high product variety, and dynamic shop floor conditions. Reinforcement Learning (RL) offers adaptive decision-making capabilities to address these challenges. RL often relies on simulation environments for its intensive training, allowing for short run times during execution. This paper reviews existing literature to examine how RL agents are modeled in terms of state space, action space, and reward function, focusing on order release and related production scheduling tasks. The findings reveal considerable variation in modeling approaches and a lack of theoretical guidance, particularly in reward design and feature selection. pdf Manufacturing and Industry 4.0Panel: Panel on Future of Simulation in Manufacturing Session Chair: Alp Akcay (Northeastern University) Shaping Tomorrow's Factories: A Panel on Simulation-Driven Manufacturing Alp Akcay (Northeastern University), Christoph Laroque (University of Applied Sciences Zwickau), Robert Rencher (The Boeing Company), Guodong Shao (NIST), Reha Uzsoy (North Carolina State University), and Nienke Valkhoff (InControl Enterprise Dynamics) Program Track: Manufacturing and Industry 4.0 Abstract AbstractSimulation has been an indispensable tool for the design, analysis, and control of manufacturing systems for decades. 
With new digital twinning technologies and artificial intelligence capabilities appearing at a fast pace, will simulation -- as we know it today -- retain its prominent role in the manufacturing industry of the future? What are the current and projected trends in the manufacturing industry that will make simulation even more relevant? How will simulation evolve to address the needs of next-generation factories? Motivated by these initial questions, the Manufacturing and Industry 4.0 track of the 2025 Winter Simulation Conference (WSC) brings together a panel of experts to discuss the future of simulation in manufacturing. pdf Complex Systems, Complex and Generative Systems, Manufacturing and Industry 4.0Digital Twins in Manufacturing Session Chair: Guodong Shao (National Institute of Standards and Technology) Characterizing Digital Factory Twins: Deriving Archetypes for Research and Industry Jonas Lick and Fiona Kattenstroth (Fraunhofer Institute for Mechatronic Systems Design IEM); Hendrik Van der Valk (TU Dortmund University); and Malte Trienens, Arno Kühn, and Roman Dumitrescu (Fraunhofer Institute for Mechatronic Systems Design IEM) Program Track: Manufacturing and Industry 4.0 Program Tag: Complex Systems Abstract AbstractThe concept of the digital twin has evolved into a key enabler of digital transformation in manufacturing. However, the adoption of digital twins for factories, or digital factory twins (DFTs), remains fragmented and often unclear, particularly for small and medium-sized enterprises. This study addresses this ambiguity by systematically deriving archetypes of digital factory twins to support clearer classification, planning, and implementation. Based on a structured literature review and expert interviews, 71 relevant DFT use cases were identified. The cluster analysis yields four distinct archetypes: (1) Basic Planning Factory Twin, (2) Advanced Simulation Factory Twin, (3) Integrated Operations Factory Twin, and (4) Holistic Digital Factory Twin. 
Each archetype is characterized by specific technical features, data integration levels, lifecycle phases, and stakeholder involvement. pdf Distributed Hierarchical Digital Twins: State-of-the-Art, Challenges and Potential Solutions Aatu Kunnari and Steffen Strassburger (Technische Universität Ilmenau) Program Track: Manufacturing and Industry 4.0 Abstract AbstractDigital Twins (DTs) provide detailed, dynamic representations of production systems, but integrating multiple DTs into a distributed ecosystem presents fundamental challenges beyond mere model interoperability. DTs encapsulate dynamic behaviors, optimization goals, and time management constraints, making their coordination a complex, unsolved problem. Moreover, DT development faces broader challenges, including data consistency, real-time synchronization, and cross-domain integration, that persist at both individual and distributed scales. This paper systematically reviews these challenges, examines how current research addresses them, and explores their implications in distributed, hierarchical DT environments. Finally, it presents preliminary ideas for a structured approach to orchestrating multiple DTs, laying the groundwork for future research on holistic DT management. pdf Synergic Use of Modelling & Simulation, Digital Twins and Large Language Models to Make Complex Systems Adaptive and Resilient Souvik Barat, Dushyanthi Mulpuru, Himabindu Thogaru, Abhishek Yadav, and Vinay Kulkarni (Tata Consultancy Services Ltd) Program Track: Complex and Generative Systems Abstract AbstractModeling and Simulation (M&S) has long been essential for decision-making in complex systems due to its ability to explore strategic and operational alternatives in a structured and risk-free manner. The emergence of Digital Twins (DTs) has further enhanced this by enabling real-time bidirectional synchronization with physical systems. 
However, constructing and maintaining accurate and adaptive models and DTs remains time- and resource-intensive and requires deep domain expertise. In this paper, we introduce an adaptive Decision-Making Framework (DMF) that integrates Large Language Models (LLMs) and Model-Driven Engineering (MDE) into the M&S and DT pipeline. By leveraging LLMs as proxy experts and synthesis aids, and combining them with MDE to improve reliability, our framework reduces manual effort in model construction, validation, and decision space exploration. We present our approach and discuss how it improves agility, reduces expert dependency, and can act as a pragmatic aid for making enterprises robust, resilient, and adaptive. pdf AnyLogic, Complex Systems, Distributed, FlexSim, Manufacturing and Industry 4.0Simulation-Driven Production Scheduling Session Chair: Quang-Vinh Dang (Eindhoven University of Technology) From Scenario Farming to Learning: A Modular Low-Code Framework for Decision Support in Scheduling Madlene Leißau and Christoph Laroque (University of Applied Sciences Zwickau) Program Track: Manufacturing and Industry 4.0 Abstract AbstractModern manufacturing environments, such as semiconductor manufacturing, require agile, data-driven decision support to cope with increasing system complexity. Discrete event simulation (DES) is a key method for evaluating scheduling strategies under uncertainty. Building on a previously introduced low-code framework for scenario farming, this paper presents an extended and evolving approach that integrates machine learning (ML) to enhance scheduling decision support. The framework automates model generation, distributed experimentation, and systematic scenario data collection, providing the basis for training data-driven decision models. Using the Semiconductor Manufacturing Testbed 2020 as a reference, initial experiments demonstrate how simulation-based insights can be transformed into intelligent scheduling aids.
Key features include model structure synchronization across simulation tools, automated experiment design, and modular integration of learning components, providing a first step towards adaptive, simulation-driven decision support systems. pdfA Heuristic-based Rolling Horizon Method for Dynamic and Stochastic Unrelated Parallel Machine Scheduling Shufang Xie, Tao Zhang, and Oliver Rose (Universität der Bundeswehr München) Program Track: Manufacturing and Industry 4.0 Program Tags: AnyLogic, Distributed Abstract AbstractIn stochastic manufacturing environments, disruptions such as machine breakdowns, variable processing times, and unexpected delays make static scheduling approaches ineffective. To address this, we propose a heuristic-based rolling horizon scheduling method for unrelated parallel machines. The rolling horizon framework addresses system stochasticity by enabling dynamic adaptation through frequent rescheduling of both existing jobs and those arriving within a rolling lookahead window. This method decomposes the global scheduling problem into smaller, more manageable subproblems. Each subproblem is solved using a heuristic approach based on a suitability score that incorporates key factors such as job properties, machine characteristics, and job-machine interactions. Simulation-based experiments show that the proposed method outperforms traditional dispatching rules in dynamic and stochastic manufacturing environments with a fixed number of jobs, achieving shorter makespans and cycle times, reduced WIP levels, and lower machine utilization. 
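As a rough illustration of the suitability-score dispatching described in the abstract above, the following sketch scores job-machine pairs and assigns jobs within one rolling-horizon window. All attribute names, weights, and the scoring formula are invented for this illustration and are not taken from the paper.

```python
# Hypothetical sketch of one dispatch step inside a rolling-horizon loop.
# Weights and attributes are illustrative assumptions, not the authors' model.

def suitability(job, machine, w=(0.5, 0.3, 0.2)):
    """Score a job-machine pair: shorter processing time, lower machine
    load, and higher job urgency all raise the score."""
    proc = machine["proc_time"][job["id"]]      # job-machine interaction
    load = machine["queue_len"]                 # machine characteristic
    urgency = 1.0 / max(job["slack"], 1e-9)     # job property
    return -w[0] * proc - w[1] * load + w[2] * urgency

def dispatch_window(jobs, machines):
    """Assign each job in the current lookahead window to its best machine."""
    plan = {}
    for job in sorted(jobs, key=lambda j: j["release"]):
        best = max(machines, key=lambda m: suitability(job, m))
        plan[job["id"]] = best["name"]
        best["queue_len"] += 1                  # update load for the next pick
    return plan
```

In a full rolling-horizon scheme this assignment step would be re-run whenever new jobs arrive or a disruption invalidates the current plan.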
pdfSimulation-based Dynamic Job Shop Scheduling Approach to Minimize the Impact of Resource Uncertainties Md Abubakar Siddique, Selim Molla, Amit Joe Lopes, and Md Fashiar Rahman (The University of Texas at El Paso) Program Track: Manufacturing and Industry 4.0 Program Tags: Complex Systems, FlexSim Abstract AbstractThe complexity of job shops is characterized by variable product routing, machine reliability, and operator learning, which necessitate intelligent assignment strategies to optimize performance. Traditional models often rely on first-available machine selection, neglecting learning curves and processing time variability. To overcome these limitations, this paper introduces the Data-Driven Job Shop Scheduling (DDJSS) framework, which dynamically selects machines based on the status of resources at the current time step. To evaluate the effectiveness of the proposed framework, we developed two scenarios using FlexSim to perform a thorough analysis. The results demonstrated significant improvements in key performance indicators, including reduced waiting time, lower queue length, and higher throughput. Output increased by over 144% and 348% for some exemplary jobs in the case studies reported in this paper. This study highlights the value of integrating learning behavior and data-driven assignments for improving decision-making in flexible job shop environments. pdf Manufacturing and Industry 4.0Energy Aware Production Planning and Control Session Chair: Madlene Leißau (University of Applied Sciences Zwickau) Energy-Efficient Parallel Batching of Jobs with Unequal and Undefined Job Sizes on Parallel Batch Processing Machines Lisa C. Günther (Fraunhofer Institute for Manufacturing Engineering and Automation IPA) Program Track: Manufacturing and Industry 4.0 Abstract AbstractThis paper investigates a parallel batching problem with incompatible families, unequal job sizes and non-identical machine capacities.
The objective is to minimize the total energy costs. Motivated by a real-world autoclave molding process in composite material manufacturing, additional factors need to be considered: machine eligibility conditions, machine availability constraints, machine-dependent energy consumption, and job sizes and machine capacities that may not be available in absolute terms. Mathematical models are formulated for both defined and undefined job and machine sizes. The latter approach creates batches based on a set of best-known batches derived from past production data to meet the machine capacity constraint. Finally, a heuristic is presented. Computational experiments are conducted based on a real case study. Energy savings of over 20% can be achieved compared to the actual planning, with sensible batch forming and machine allocation, in a short amount of computing time. pdfIntegrating Energy Storage into Shopfloor Dispatching: A Threshold-Based Energy-Aware Control Approach Wolfgang Seiringer, Balwin Bokor, Klaus Altendorfer, and Roland Braune (University of Applied Sciences Upper Austria) Program Track: Manufacturing and Industry 4.0 Abstract AbstractRising energy price volatility and the shift toward renewables are driving the need for energy-aware production planning. This paper investigates the integration of energy storage systems into dynamic dispatching to balance production-logistics and energy costs. Building on prior work that introduced a workload- and price-based dispatching rule, we extend the model to include energy storage loading and unloading decisions. The rule prioritizes storage refill at low prices and lets machines resort to stored energy when grid prices rise and the workload is high. A simulation-based evaluation examines scenarios with different storage capacities, whereby decision rule parameters are optimized. Computational results demonstrate that a reduction of operational costs is possible without deteriorating production logistics performance.
By decoupling energy sourcing from real-time prices, manufacturers achieve both resilience and cost savings. This research contributes to sustainable manufacturing by offering a practical strategy for integrating energy storage into production planning under volatile energy conditions. pdf
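The threshold rule sketched in the abstract above (refill storage at low prices, draw from storage at high prices and high workload) can be illustrated in a few lines. The threshold values, units, and the single-period decision model here are assumptions of this illustration, not the paper's actual parameters.

```python
# Illustrative threshold rule for energy sourcing in one dispatching period.
# Thresholds and the simple state model are invented for this sketch.

def energy_decision(price, workload, storage_level, capacity,
                    price_low=30.0, price_high=70.0, workload_high=0.8):
    """Return 'charge', 'discharge', or 'grid' for the next period."""
    if price <= price_low and storage_level < capacity:
        return "charge"        # refill storage while energy is cheap
    if price >= price_high and workload >= workload_high and storage_level > 0:
        return "discharge"     # run machines from storage at peak prices
    return "grid"              # otherwise buy at the current grid price
```

In a simulation-based evaluation like the one described, the two price thresholds and the workload threshold would be the decision-rule parameters to optimize.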
MASM: Semiconductor Manufacturing Track Coordinator - MASM: Semiconductor Manufacturing: John Fowler (Arizona State University), Chia-Yen Lee (National Taiwan University), Lars Moench (University of Hagen) Python, MASM: Semiconductor ManufacturingSemiconductor Processes I Session Chair: Andreas Klemmt (Infineon Technologies Dresden GmbH) Towards Process Optimization by Leveraging Relationships between Electrical Wafer Sorting and Complete-line Statistical Process Control Data Dmitrii Fomin (IMT-Atlantique); Andres Torres (Siemens); Valeria Borodin (IMT-Atlantique); Anastasiia Doinychko (Siemens); David Lemoine (IMT-Atlantique); Agnès Roussy (Mines Saint-Etienne, CNRS, UMR 6158 LIMOS); and Daniele Pagano, Marco Stefano Scroppo, Gabriele Tochino, and Daniele Vinciguerra (STMicroelectronics) Program Track: MASM: Semiconductor Manufacturing Program Tag: Python Abstract AbstractIn semiconductor manufacturing, Statistical Process Control (SPC) ensures that products meet the Electrical Wafer Sort (EWS) tests performed at the end of the manufacturing flow. In this work, we model the EWS tests for several products using inline SPC data from the Front-End-Of-Line (FEOL) to the Back-End-Of-Line (BEOL). SPC data tend to be inherently sparse because measuring all wafers, lots, and products is both costly and can significantly impact the throughput. In contrast, EWS data is densely collected at the die level, offering high granularity. We propose to model the problem as a regression task to uncover interdependencies between SPC and EWS data at the lot level. By applying two learning strategies, mono- and multi-target, we demonstrate empirically that leveraging families of EWS tests enhances model performance. The performance and practical relevance of the approach are validated through numerical experiments on real-world industrial data. pdfCross-Process Defect Attribution using Potential Loss Analysis Tsuyoshi Ide (IBM T. J. 
Watson Research Center) and Kohei Miyaguchi (IBM) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractCross-process root-cause analysis of wafer defects is among the most critical yet challenging tasks in
semiconductor manufacturing due to the heterogeneity and combinatorial nature of processes along the
processing route. This paper presents a new framework for wafer defect root cause analysis, called Potential
Loss Analysis (PLA), as a significant enhancement of the previously proposed partial trajectory regression
approach. The PLA framework attributes observed high wafer defect densities to upstream processes by
comparing the best possible outcomes generated by partial processing trajectories. We show that the task
of identifying the best possible outcome can be reduced to solving a Bellman equation. Remarkably, the
proposed framework can simultaneously solve the prediction problem for defect density as well as the
attribution problem for defect scores. We demonstrate the effectiveness of the proposed framework using
real wafer history data. pdfSemi-supervised Contrastive Learning from Semiconductor Manufacturing to TFT-LCD Defect Map Chia-Yen Lee, I-Kai Lai, and Yen-Wen Chen (National Taiwan University) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractDefect pattern detection and classification present significant challenges in thin-film transistor liquid-crystal display (TFT-LCD) manufacturing. Traditional machine learning approaches struggle with data scarcity, insufficient labeling, and class imbalance. To address these limitations, this study proposes a two-stage defect classification framework for Test for Open/Short (TOS) maps, which measure electrical defects. The framework applies contrastive pre-training on semiconductor manufacturing wafer bin maps to enhance classification with limited TOS data, followed by a novel dual-branch open-world semi-supervised learning method that robustly handles both class imbalance and novel pattern discovery. pdf Java, MASM: Semiconductor ManufacturingSemiconductor Simulation Session Chair: Giulia Pedrielli (Arizona State University) Simulation-based Optimization Approach for Solving Production Planning Problems Hajime Sekiya and Lars Moench (University of Hagen) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractProduction planning for wafer fabs often relies on linear programming. Exogenous lead time estimates or workload-dependent lead times by means of clearing functions are taken into account in common planning formulations. For realistic performance assessment purposes, the process uncertainty is captured by simulating the execution of the resulting release schedules, i.e., expected values for profit or costs are considered. In the present paper, we take a more direct approach using simulation-based optimization. The capacity constraints and the lead time representation are indirectly respected by executing release schedules in a simulation model of a large-scale wafer fab.
Variable neighborhood search (VNS) is used to compute release schedules. We show by designed experiments that the proposed approach is able to outperform the allocated clearing function (ACF) formulation for production planning under many experimental conditions. pdfSimulation of a Semiconductor Manufacturing Research and Development Cleanroom Baptiste Loriferne (CEA LETI, Mines Saint-Etienne); Gaëlle Berthoux (CEA LETI); Valeria Borodin (IMT Atlantique); Vincent Fischer (CEA LETI); and Agnès Roussy (Mines Saint-Etienne) Program Track: MASM: Semiconductor Manufacturing Program Tag: Java Abstract AbstractThis paper focuses on a Research and Development (R&D) semiconductor manufacturing system. By virtue of their vocation, R&D facilities tolerate much more variability in processes and outcomes than industrial-scale ones. In such environments, operating under conditions characterized by high uncertainty and occurrences of (un)knowns corresponds to normal operating conditions rather than abnormal ones. This paper characterizes the key entities and operational aspects of a semiconductor R&D cleanroom and introduces a discrete-event simulation model that captures these elements. The simulation model is grounded in empirical data and reflects real-life operations management practices observed in actual R&D cleanroom settings. Preliminary computational results based on real-life instances are presented, and future research directions are outlined to support resilient decision-making in environments where high levels of uncertainty are part of normal operating conditions. 
pdfFrom Disruption to Profitability: A Simulation Study on High-Runner-Focused Flexibility in Enhancing Resilience in Dual-Fab Systems Bahar Irem Sabir (Izmir University of Economics), Hans Ehm (Infineon Technologies AG), Kamil Erkan Kabak (Izmir University of Economics), and Abdelgafar Ismail Mohammed Hamed (Infineon Technologies AG) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractThis study investigates which product segments benefit most from dual-fab flexibility in semiconductor manufacturing. Dual-fab flexibility refers to the strategic capability of producing the same product at multiple geographically dispersed fabrication facilities. The focus is on selective flexibility, defined as the targeted allocation of flexibility to specific product segments, such as high-runners (HR: high-volume, frequently ordered products) and low-runners (LR: low-volume, infrequently ordered products). A discrete-event simulation model evaluates four flexibility scenarios under demand uncertainty. Results indicate that prioritizing flexibility for HR products leads to approximately a 7% improvement in cumulative profit values after five years of simulation compared to baseline scenarios. Simulations under 80% forecast accuracy further validate the model’s practical relevance. The findings highlight the value of strategic, demand-driven flexibility in enhancing semiconductor supply chain resilience and provide a foundation for future research on incorporating stochastic demand, dynamic lead times, and advanced forecasting techniques.
pdf Neural Networks, Python, MASM: Semiconductor ManufacturingSemiconductor Scheduling and Dispatching I Session Chair: Hyun-Jung Kim (KAIST) Health Index-based Risk-aware Dispatching for Overhead Hoist Transport Systems in Semiconductor Fab Sanguk Im (Korea Advanced Institute of Science and Technology) and Young Jae Jang (Korea Advanced Institute of Science and Technology, DAIM Research Corp) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractOverhead Hoist Transport (OHT) systems significantly influence semiconductor fab performance.
Traditional dispatching methods such as Nearest Job First (NJF) focus on minimizing travel distance
without considering the dynamic health of vehicles. As a result, unstable vehicles may be dispatched to critical areas, increasing the risk of system-wide disruptions. Previous health-aware approaches often rely on Mean Time To Failure (MTTF), which assumes that failure events are observable—a strong assumption given the uncertainty and latent nature of real-world failures. To address this limitation, we introduce a Reconstruction Error (RE)-based Health Index (HI) derived from anomaly detection models, enabling dynamic and risk-sensitive dispatching. We define a risk-aware dispatching score that incorporates both traffic exposure and health degradation, and simulate diverse HI profiles with varying predictive fidelity. Our experiments aim to evaluate how the quality of the HI affects dispatching performance and system robustness. pdfGraph-Based Reinforcement Learning for Dynamic Photolithography Scheduling Sang-Hyun Cho, Sohyun Jeong, and Jimin Park (Korea Advanced Institute of Science and Technology); Boyoon Choi and Paul Han (Samsung Display); and Hyun-Jung Kim (Korea Advanced Institute of Science and Technology) Program Track: MASM: Semiconductor Manufacturing Program Tags: Neural Networks, Python Abstract AbstractThis paper addresses the photolithography process scheduling problem, a critical bottleneck in both display and semiconductor production. In display manufacturing, as the number of deposited layers increases and reentrant operations become more frequent, the complexity of scheduling processes has significantly increased. Additionally, growing market demand for diverse product types underscores the critical need for efficient scheduling to enhance operational efficiency and meet due dates. To address these challenges, we propose a novel graph-based reinforcement learning framework that dynamically schedules photolithography operations in real time, explicitly considering mask locations, machine statuses, and associated transfer times.
Through numerical experiments, we demonstrate that our method achieves consistent and robust performance across various scenarios, making it a practical solution for real-world manufacturing systems. pdfModeling and Solving Complex Job-Shop Scheduling Problems for Reliability Laboratories Jessica Hautz (KAI), Andreas Klemmt (Infineon Technologies Dresden GmbH), and Lars Moench (University of Hagen) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractWe consider job-shop scheduling problems with stress test machines. Several jobs can be processed at the same time on such a machine if the sum of their sizes does not exceed its capacity. Only jobs with operations of the same incompatible family can be processed at the same time on a machine. The machine can be interrupted to start a new job or to unload a completed one. A conditioning time is required to reach the stress test temperature again. The machine is unavailable during the conditioning process. Operations that cannot be completed before a conditioning activity have to continue with processing after the machine is available again. The makespan is to be minimized. A constraint programming formulation and a variable neighborhood search scheme based on an appropriate disjunctive graph model are designed. Computational experiments based on randomly generated problem instances demonstrate that the algorithms perform well. pdf MASM: Semiconductor ManufacturingMASM Keynote Session Chair: John Fowler (Arizona State University) Beyond Digital Twins, using Content-Rich Agentic Models to Drive New Breakthroughs in Tomorrow’s Factories Devadas Pillai (Intel) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractThe future of leading-edge manufacturing demands innovations that generate real benefits for the business at a pace and scale we have never seen before.
This talk will discuss major forces underway shaping manufacturing’s future, transitioning from digital twins to being increasingly driven by agentic and multi-model solutions that enable full autonomy and augmented intelligence-based decision support systems in the factory. Their foundations require highly dependable, content-rich, and collaborative environments that can sense, perceive, reason, plan, and execute complex production decisions safely around the clock. And as business drivers change, these capabilities must adapt and stay available and resilient, while being extended and scaled out quickly. The talk will outline how these tools are moving from being advisory in nature to the next frontier of execution by objectively characterizing complexity and disruptions autonomously and creating runways that unlock entirely new levels of agility and productivity that manufacturing demands. pdf MASM: Semiconductor ManufacturingPanel: Semiconductor Simulation: Past and Future Session Chair: John Fowler (Arizona State University) Simulation in Semiconductor Manufacturing Between 2000 and 2050 - Lessons Learned and Future Expectations John Fowler (Arizona State University), Young Jae Jang (KAIST), Adar Kalir (Intel Foundry Manufacturing), Peter Lendermann (D-SIMLAB Technologies), Lars Moench (University of Hagen), Oliver Rose (University of the Bundeswehr), Georg Seidel (Infineon Technologies Austria AG), and Claude Yugma (Ecole de Mines Saint-Etienne) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractThis is a panel paper that discusses the use of discrete-event simulation to address problems in semiconductor manufacturing. We have gathered a group of expert semiconductor researchers and practitioners from around the world who (often successfully) applied discrete-event simulation to semiconductor-related problems in the past. The paper collects their answers to an initial set of questions.
These serve to not only showcase the current state-of-the-art of discrete-event simulation (DES) in semiconductor manufacturing but also provide insights into where the field is heading and the impact it will have on our world by 2050 in the semiconductor manufacturing domain. pdf MASM: Semiconductor ManufacturingSemiconductor Processes II Session Chair: Raphael Herding (Forschungsinstitut für Telekommunikation und Kooperation e. V., Westfälische Hochschule) Real-time Metrology Capacity Optimization in Semiconductor Manufacturing Mathis Martin, Stéphane Dauzère-Pérès, and Claude Yugma (Ecole des Mines de Saint-Etienne) and Aymen Mili and Renaud Roussel (STMicroelectronics) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractThis paper addresses a capacity management problem in semiconductor metrology, where lots are typically sent to measurement under static sampling rules. Such rigid strategies often cause bottlenecks when unexpected events occur, delaying production. To address this, we propose a corrective approach that dynamically selects lots to skip, i.e., not measured, while prioritizing the most critical ones and ensuring that metrology tools respect capacity limits. The method combines a skipping algorithm with the Iterated Min-Max (IMM) workload balancing procedure, which ensures a fair workload distribution and helps identify the most critical tools. Several performance indicators are introduced to evaluate the efficiency of this approach compared to a classical balancing strategy. Computational experiments with industrial data demonstrate that integrating IMM improves lot selection, reduces the number of skipped lots, and preserves measurement for the highest priority ones while better satisfying capacity constraints. pdfThreshold Voltage Distribution Narrowing by Machine Learning for Trench Mosfets Marc Fehlhaber (Technical University of Munich) and Dr. 
Ludger Borucki, Hans Ehm, Abdelgafar Ismail, and Anar Akhmedov (Infineon Technologies AG) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractA narrow threshold voltage distribution is essential to ensure uniform performance among parallel-connected power MOSFETs. This work analyzes threshold voltage variation in trench MOSFETs using real data and proposes the use of machine learning models to improve manufacturing uniformity. pdfVirtual Semiconductor Fabrication: Impact of Within-Wafer Variations on Yield and Performance Sumant Sarkar, Brett Lowe, and Benjamin Vincent (Lam Research) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractThe semiconductor industry aims to enhance device performance while maintaining a high yield. Managing variations in process steps is a challenge, traditionally addressed through physical silicon experimentation. This study developed a virtual fabrication model of a FinFET transistor to analyze the impact of process variations, such as etch and deposition steps, on key transistor parameters like threshold voltage (Vth), Drain-Induced Barrier Lowering (DIBL), and subthreshold swing (SS). Analysis of within-wafer variations in spacer nitride thickness revealed significant variability, affecting Vth and resulting in a yield of 66.7%. Adjusting the nominal spacer nitride deposition thickness and controlling deviations improved yield, confirmed by Monte Carlo simulations. Three process recipes with different deposition thickness distributions were tested, with the "donut" region recipe achieving the highest yield (94.8%) and smallest Vth standard deviation. Additionally, combined variations in spacer nitride deposition and epitaxial SiC growth thickness were analyzed, demonstrating the model's capability to predict and optimize multiple process recipes.
pdf Python, Validation, MASM: Semiconductor ManufacturingSemiconductor Supply Chains Session Chair: Adar Kalir (Intel Israel, Ben-Gurion University) Modular Python Library for Simulations of Semiconductor Assembly and Test Process Equipment Robert Dodge (Arizona State University), Zachary Eyde (Intel Corporation), and Giulia Pedrielli (Arizona State University) Program Track: MASM: Semiconductor Manufacturing Program Tag: Python Abstract AbstractIncreasing global demand has led to calls for better methods of process improvement for semiconductor wafer manufacturing. Of these methods, digital twins have emerged as a natural extension of already existing simulation techniques. We argue that despite their extensive use in literature, the current tools used to construct semiconductor simulations are underdeveloped. Without a standardized tool to build these simulations, their modularity and capacity for growth are heavily limited. In this paper, we propose and implement a library of classes in the Python language designed to build on top of the already existing SimPy library. These classes are designed to automatically handle specific common logical features of semiconductor burn-in processes. This design allows users to easily create modular, adaptable, digital twin-ready simulations. Preliminary results demonstrate the library’s efficacy in predicting against benchmark data provided by the Intel Corporation and encourage further development. 
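To give a concrete feel for the kind of capacity-limited burn-in logic the modular library above is built to model, here is a tiny discrete-event sketch written in plain Python (not the authors' SimPy-based library; all names, the oven capacity, and the dwell time are invented for this illustration).

```python
import heapq

# Minimal discrete-event sketch of a capacity-limited burn-in oven.
# Lot counts, capacity, and burn time are illustrative assumptions.

def simulate_burn_in(n_lots, capacity=2, burn_time=4.0):
    """Return (lot_id, finish_time) pairs for lots all queued at time 0."""
    events = []                     # min-heap of (finish_time, lot_id)
    waiting = list(range(n_lots))   # lots not yet loaded
    completed = []
    now = 0.0
    # load the initial batch up to the oven capacity
    while waiting and len(events) < capacity:
        heapq.heappush(events, (now + burn_time, waiting.pop(0)))
    while events:
        now, lot = heapq.heappop(events)
        completed.append((lot, now))
        if waiting:                 # a slot freed up: load the next lot
            heapq.heappush(events, (now + burn_time, waiting.pop(0)))
    return completed
```

A library such as the one described would wrap this kind of logic in reusable, composable classes on top of SimPy so that stations, handlers, and routing rules can be assembled into a digital twin-ready model.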
pdfSimulating Front-End Semiconductor Supply Chains to assess Master Plans under Uncertainty: a Case Study Aaron Joël Sieders and Cas Rosman (NXP Semiconductors N.V.), Collin Drent (Eindhoven University of Technology), and Alp Akcay (Northeastern University) Program Track: MASM: Semiconductor Manufacturing Program Tag: Validation Abstract AbstractThis research presents an aggregated simulation model for the front-end semiconductor supply chain to assess master plans, focusing on the impact of demand and supply uncertainties on the key performance indicators on-time delivery and inventory on hand. Supply uncertainty is modeled using discrete distributions of historical cycle times, incorporating load-dependent cycle times through a non-linear regression model. To model demand uncertainty, we use future forecasts and adjust them by sampling from distributions of historical forecast percentage errors. By comparing master plan performance under uncertain conditions with those from deterministic scenarios, the model provides valuable insights into how these uncertainties influence supply chain performance. Using data from NXP Semiconductors N.V., a Dutch semiconductor
manufacturing and design company, we demonstrate the model’s applicability and offer practical guidance for industry practitioners. Based on numerical experiments, we conclude that the impact of demand and supply uncertainty significantly differs compared to deterministic planning. pdfA Foundational Framework for Generative Simulation Models: Pathway to Generative Digital Twins for Supply Chain Surdeep Chotaliya, John Fowler, and Giulia Pedrielli (Arizona State University); David Bayba, Mikayla Norton, and Priyanka Sain (Intel Corporation); and Kaixu Yu (Arizona State University) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractDesigning high-fidelity simulation models from natural language descriptions remains a significant challenge, particularly in complex domains like supply chains. Pre-trained large language models (LLMs), when used without domain-specific adaptation, often fail to translate unstructured inputs into executable model representations. In this work, we present a fine-tuned LLM-based pipeline specifically tailored for simulation model generation. We construct a high-quality dataset using a novel data generation process that captures diverse supply chain scenarios. The fine-tuned LLM first converts human-like natural language scenario descriptions into structured representations, then translates these into executable code for a modular Python-based simulation engine built to support a wide range of supply chain configurations. Quantitative and qualitative evaluations show that the fine-tuned model consistently generates high-fidelity simulation models, significantly outperforming pre-trained LLMs in terms of structural accuracy, simulation behavior, and its ability to robustly extract relevant information from linguistically variable natural language descriptions. 
pdf Neural Networks, Supply Chain, MASM: Semiconductor ManufacturingSemiconductor Demand Management Session Chair: Peter Lendermann (D-SIMLAB Technologies Pte Ltd) An Empirical Study on the Assessment of Demand Forecasting Reliability for Fabless Semiconductor Companies In-Guk Choi and Seon-Young Hwang (Korea Advanced Institute of Science and Technology); Jeongsun Ahn, Jehun Lee, and Sanghyun Joo (Korea Advanced Institute of Science and Technology); Kiung Kim, Haechan Lee, and Yoong Song (Samsung Electronics); and Hyung-Jung Kim (Korea Advanced Institute of Science and Technology) Program Track: MASM: Semiconductor Manufacturing Program Tags: Neural Networks, Supply Chain Abstract AbstractFabless semiconductor companies—semiconductor design experts without their own factories—serve as the essential bridge between sophisticated customer needs and technological innovations, playing a pivotal role in the semiconductor supply chain. At these companies, planning teams receive demand forecasts from the sales team and develop production plans that consider inventory, capacity, and lead time. However, due to the inherent characteristics of the semiconductor industry—high demand volatility, short product cycles, and extended lead times—a substantial gap often exists between sales forecasts and actual demand. Consequently, evaluating forecast reliability is critical for planning teams that rely solely on sales forecasts for production planning. In this paper, we propose a novel machine learning framework that assesses forecast reliability by classifying demand forecasts as either overestimates or underestimates rather than using regression methods. Experimental results confirm its effectiveness in assessing forecast reliability.
pdfTrue Demand Framework for Demand and Inventory Transparency within Semiconductor Supply Chains Philipp Ulrich and Yannick Mester (Infineon Technologies AG), Bibi de Jong (NXP Semiconductors N.V.), and Marta Bonik and Hans Ehm (Infineon Technologies AG) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractAccurate demand forecasting is essential for the semiconductor industry to optimize production and meet customer needs. However, challenges like the COVID-19 pandemic, tariffs, and the Bullwhip Effect — the amplification of demand fluctuations along supply chains — create uncertainty and variability. While greater transparency and collaboration could help address these issues, reluctance to share sensitive data across supply chain tiers remains a significant barrier.
We present the True Demand Framework for semiconductor supply chains to address these challenges. This framework includes a monthly end-to-end supply chain survey that ensures privacy for all tiers through Multi-Party Computation. The surveys capture key data to reduce variability and connect with annual surveys, linking semiconductor technology nodes to end-market demand. Results are mapped to the semantic Digital Reference model, ensuring interoperability with other surveys and external market data. This enables advanced forecasting and simulation models, improving demand and inventory planning across semiconductor supply chains. pdfA Hybrid Approach for Short-term Demand Forecasting: A Computational Study Raphael Herding (Forschungsinstitut für Telekommunikation und Kooperation (FTK), Westfälische Hochschule) and Lars Moench (Forschungsinstitut für Telekommunikation und Kooperation (FTK), University of Hagen) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractWe consider a short-term demand forecasting problem for semiconductor supply chains. In addition to observed demand quantities, order entry information is available. We compute a combined forecast based on an exponential smoothing technique, a long short-term memory (LSTM) network, and the order entry information. The weights for the different forecast sources and parameters for exponential smoothing are computed using a genetic algorithm. Computational experiments based on a rich data set from a semiconductor manufacturer are conducted. The results demonstrate that the best forecast performance is obtained if all the different forecasts are combined. 
pdf DEVS, MOZART LSE, MASM: Semiconductor ManufacturingSemiconductor Scheduling and Dispatching II Session Chair: Oliver Rose (University of the Bundeswehr Munich) Integrated RTS-RTD Simulation Framework for Semiconductor Manufacturing System Seongho Cho (Ajou University), Donguk Kim (LG Production and Research Institute), and Sangchul Park (Ajou University) Program Track: MASM: Semiconductor Manufacturing Program Tags: DEVS, MOZART LSE Abstract AbstractThe complexity of modern semiconductor fabrication (FAB) systems makes it difficult to implement integrated simulation systems that combine production and logistics simulators. As a result, these simulators have traditionally been developed independently. However, in actual FAB operations, information exchange between Real-Time Schedulers (RTS) and Real-Time Dispatchers (RTD) coordinates production activities. To address this issue, we propose a coupled RTS–RTD simulation framework that integrates production and logistics simulators into a unified environment. In addition, we introduce a dynamic decision-making rule that enables flexible responses when logistical constraints prevent execution of the original production schedule. Simulation experiments were conducted using the SMT2020 and SMAT2022 datasets. The results show that selectively following RTD decisions, instead of strictly adhering to RTS-generated schedules, can significantly improve production efficiency in FAB operations. pdfSolving Flexible Flow-shop Scheduling Problems with Maximal Time Lags: A Computational Study Daniel Schorn and Lars Moench (University of Hagen) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractWe consider a scheduling problem for a two-stage flexible flow shop with maximal time lags between consecutive operations motivated by semiconductor manufacturing. The jobs have unequal ready times and both initial and inter-stage time lags. The total weighted tardiness is the performance measure of interest. 
A heuristic scheduling framework using genetic programming (GP) to automatically discover priority indices is applied to the scheduling problem at hand. Computational experiments are carried out on randomly generated problem instances. The results are compared with those of a reference heuristic based on a biased random-key genetic algorithm combined with a backtracking procedure and a mixed-integer linear programming-based decomposition approach. The results show that high-quality schedules are obtained in a short amount of computing time using the GP approach. pdfProactive Hot Lot Routing for OHT Systems in Semiconductor Fab Sung Woo Park and Sungwook Jang (KAIST) and Young Jae Jang (KAIST, Daim Research) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractSemiconductor fabs must expedite urgent wafer carriers, known as Hot Lots, yet excessive priority can impair throughput. DAIM Research Corp. categorizes priority into three levels; we address Level 2, where the Hot Lot OHT wins all local contests for path selection and merging while other moves proceed. We present a two-layer control framework. First, extending the reinforcement-learning-based dynamic routing algorithm, we temporarily modify the expected travel time of tracks ahead of the Hot Lot, causing regular OHTs to detour proactively. Second, we modify the entrance sequence at the merge of two tracks to reduce Hot Lot stops and waits. The effectiveness of the framework is demonstrated through simulation-based experiments, which show that Hot Lot travel time is shortened with acceptable impact on overall throughput. 
pdf Supply Chain, MASM: Semiconductor ManufacturingSemiconductor Factory Performance Session Chair: Claude Yugma (École Nationale Supérieure des Mines de Saint-Étienne) Predictive Modeling in Semiconductor Manufacturing: a Systematic Review of Cycle Time Prediction, Anomaly Detection, and Digital Twin Integration Claude Yugma, Adrien Wartelle, and Stéphane Dauzère-Pérès (Mines Saint-Étienne, Univ Clermont Auvergne); Pascal Robert and Renaud Roussel (STMicroelectronics); and Jasper van Heugten and Taki-Eddine Korabi (minds.ai) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractAccurate cycle time prediction is critical for optimizing throughput, managing WIP, and ensuring responsiveness in semiconductor manufacturing. This systematic review synthesizes literature from Google Scholar, Web of Science, and IEEE Xplore, covering analytical, statistical, AI-driven, and hybrid approaches. The key contributions are: (1) a structured, comparative evaluation of predictive techniques in terms of accuracy, interpretability, and scalability, and (2) identification of research gaps and emerging directions, such as self-adaptive models, generative AI for data augmentation, and enhanced human-AI collaboration. This review provides insights to support the development of robust forecasting systems aligned with the evolving demands of semiconductor manufacturing. pdfApplying Operating Curve Principles to Non-Manufacturing Processes to gain Efficiency and Effectiveness in Global Semiconductor Supply Chains Niklas Lauter and Hans Ehm (Infineon Technologies AG) and Gerald Reiner (WU Wien) Program Track: MASM: Semiconductor Manufacturing Program Tag: Supply Chain Abstract AbstractIn today's volatile and complex world, non-manufacturing processes are crucial for global agile and, thus, resilient semiconductor supply chains. Examples of non-manufacturing processes include new product samples, fab overarching process flows, and enabling processes. 
This conceptual paper begins with the hypothesis that operating curve principles from manufacturing can also be applied to non-manufacturing processes. To achieve this, we needed to understand why the principles of operating curves and flow factors lead to efficiency and effectiveness in semiconductor manufacturing. Our goal was to determine how these principles can be applied to broader contexts and specific use cases. The initial experimental results are promising. Furthermore, it is essential to assess how non-manufacturing processes are structured. The paper concludes with examples demonstrating the potential to bridge the efficiency gap between manufacturing and non-manufacturing processes. The ultimate goal is to match both efficiency levels, thus opening new opportunities within global semiconductor supply chains. pdfDynamic Risk Control in Serial Production Systems Under Queue Time Constraints YenTzu Huang and ChengHung Wu (National Taiwan University) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractThis research introduces a dynamic risk control framework for a two-stage production system with downstream queue time constraints. Queue time constraints are critical in semiconductor manufacturing, as their violations lead to quality degradation and production losses. To reduce the risk of violating queue time constraints, a key principle in queue time constraint management is controlling upstream production when downstream queue times exceed critical thresholds. Traditional control methods face challenges in accurately responding to dynamic manufacturing conditions. Our approach develops a dynamic admission control method that predicts queue times by estimating the real-time downstream processing capacity and the number of preceding jobs. We implement a two-stage system combining upstream admission control with downstream priority dispatching. 
Through simulation experiments in multi-machine, multi-product systems, we evaluate performance across various utilization rates and capacity configurations. Results show that our approach reduces total costs, including scrap and inventory holding costs, while maintaining optimal throughput. pdf Conceptual Modeling, Python, Supply Chain, MASM: Semiconductor ManufacturingSemiconductor Potpourri Session Chair: Lars Moench (University of Hagen) Classical and AI-based Explainability of Ontologies on the Example of the Digital Reference – the Semantic Web for Semiconductor and Supply Chains Containing Semiconductors Marta Bonik (Infineon Technologies AG), Eleni Tsaousi (Harokopio University of Athens), Hans Ehm (Infineon Technologies AG), and George Dimitrakopoulos (Harokopio University of Athens) Program Track: MASM: Semiconductor Manufacturing Program Tags: Conceptual Modeling, Python, Supply Chain Abstract AbstractOntologies are essential for structuring knowledge in complex domains like semiconductor supply chains but often remain inaccessible to non-technical users. This paper introduces a combined classical and AI-based approach to improve ontology explainability, using Digital Reference (DR) as a case study. The first approach leverages classical ontology visualization tools, enabling interactive access and feedback for user engagement. The second integrates Neo4j graph databases and Python with a large language model (LLM)-based architecture, facilitating natural language querying of ontologies. A post-processing layer ensures reliable and accurate responses through query syntax validation, ontology schema verification, fallback templates, and entity filtering. The approach is evaluated with natural language queries, demonstrating enhanced usability, robustness, and adaptability. 
By bridging the gap between traditional query methods and AI-driven interfaces, this work promotes the broader adoption of ontology-driven systems in the Semantic Web and industrial applications, including semiconductor supply chains. pdfSensitivity Analysis of the SMT2020 Testbed for Risk-Based Maintenance in Semiconductor Manufacturing Philipp Andelfinger (Nanyang Technological University); Shuaibing Lu (Beijing University of Technology); Chew Wye Chan, Fei Fei Zhang, and Boon Ping Gan (D-SIMLAB Technologies Pte Ltd); and Jie Zhang and Wentong Cai (Nanyang Technological University) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractA key concern for factory operators is the prioritization of machine groups for preventive maintenance to reduce downtimes of the most critical machine groups. Typically, such maintenance decisions are made based on heuristic rules supported by discrete-event simulations. Here, we present a large-scale sensitivity analysis that elucidates the relationship between machine group features and factory-level performance indicators in the presence of unplanned tool downtimes in SMT2020. From extensive simulation data, we extract predictors that permit a machine group ranking according to their criticality. An exhaustive combinatorial evaluation of 11 machine group features allows us to identify the most predictive features at different subset sizes. By evaluating and visualizing predictions made using linear regression models and neural networks against the ground truth data, we observe that to accurately rank machine groups, capturing nonlinearities is vastly more important than the size of the feature set. 
pdfProductization of Reinforcement Learning in Semiconductor Manufacturing Harel Yedidsion, David Norman, Prafulla Dawadi, Derek Adams, Luke Krebs, and Emrah Zarifoglu (Applied Materials) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractThe semiconductor industry faces complex challenges in scheduling, demanding efficient solutions. Reinforcement Learning (RL) holds significant promise; however, transitioning it from research to a productized solution requires overcoming key challenges such as reliability, tunability, scalability, delayed rewards, ease of use for operators, and connectivity with legacy systems. This paper explores these challenges and proposes strategies for effective productization of RL in wafer scheduling as developed by Applied Materials’ AI/ML team in the Applied SmartFactory® offering. In the context of this paper, “productization” refers specifically to the engineering and system-level integration needed to make RL solutions operational in fabs. pdf Supply Chain, MASM: Semiconductor ManufacturingSemiconductor Planning Session Chair: Lars Moench (University of Hagen) Improving Plan Stability in Semiconductor Manufacturing through Stochastic Optimization: a Case Study Eric Thijs Weijers and Nino Sluijter (NXP Semiconductors N.V., Eindhoven University of Technology); Gijs Hogers (NXP Semiconductors N.V., Tilburg University); Kai Schelthoff (NXP Semiconductors N.V.); and Ivo Adan and Willem van Jaarsveld (Eindhoven University of Technology) Program Track: MASM: Semiconductor Manufacturing Program Tag: Supply Chain Abstract AbstractIn this study, we propose a two-stage stochastic programming method to improve plan stability in semiconductor supply chain master planning in a rolling horizon setting. The two-stage programming model is applied to real-world data from NXP Semiconductors N.V. to assess the quality of generated plans based on the KPIs plan stability, on-time delivery, and inventory position. 
We also compare the performance of two-stage stochastic programming to linear programming. To model demand uncertainty, we propose to fit distributions to historical demand data from which stochastic demand can be sampled. For modeling supply, we propose an aggregated rolling horizon simulation model of the front-end supply chain. Based on the performed experiments, we conclude that two-stage programming outperforms LP in terms of plan stability, while performing comparably in terms of inventory position and on-time delivery. pdfAvailable-to-Promise vs. Order-Up-To: A Discrete-Event Simulation of Semiconductor Inventory Trade-Offs Ayhan Gökcem Cakir (Infineon Technologies AG, Technical University of Munich) and Hans Ehm (Infineon Technologies AG) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractComplex supply networks, volatile demand, and up to 26-week lead times pose severe trade-offs between inventory efficiency and service flexibility in semiconductor supply chains. We develop a discrete-event simulation to compare Order-up-to policies against an Available-to-promise (ATP) approach. Results show that ATP dynamically reallocates capacity, cuts reliance on finished-goods buffers, and boosts operational agility without eroding financial performance. Our findings suggest integrating ATP with real-time decision support can reconcile efficiency and flexibility under extreme lead-time uncertainty by enabling proactive order adjustments and decreasing backorders and cost overruns. pdfPlan Stability as a Key Performance Indicator in Semiconductor Wafer Planning: a Commercial Case Study Kai Schelthoff, Luc Moors, and Ricky Andriansyah (NXP Semiconductors) Program Track: MASM: Semiconductor Manufacturing Abstract AbstractIn semiconductor wafer planning, long lead times and volatile demand signals pose significant challenges. Reacting to every demand change introduces nervousness and inefficiencies, particularly in backend operations. 
This case study presents NXP's Wafer Workbench, a decision-support tool that enables planners to assess demand changes through simulation under various capacity scenarios. By comparing new demand signals with existing wafer plans using KPIs such as Expected Service Level and Capacity Utilization, the tool enhances visibility and coordination. The Wafer Workbench has significantly reduced planning workload and infeasibilities while improving plan quality. Future enhancements include flexible scenario simulation, stochastic optimization to address demand uncertainty, and generative AI agents for automated plan adjustments. pdf
Military and National Security Applications Track Coordinator - Military and National Security Applications: Lance Champagne (Air Force Institute of Technology), James Grymes (United States Military Academy) Complex Systems, Data Analytics, Emergent Behavior, Netlogo, Python, Rare Events, Military and National Security ApplicationsModeling Human and Military Movement Dynamics Session Chair: Susan Aros (Naval Postgraduate School) Extending Social Force Model for the Design and Development of Crowd Control and Evacuation Strategies using Hybrid Simulation Best Contributed Applied Paper - Finalist Aaron LeGrand and Seunghan Lee (Mississippi State University) Program Track: Military and National Security Applications Program Tags: Complex Systems, Emergent Behavior, Rare Events Abstract AbstractEfficient crowd control in public spaces is critical for mitigating threats and ensuring public safety, especially in scenarios where live testing environments are limited. It is important to study crowd behavior following disruptions and strategically allocate law enforcement resources to minimize the impact on civilian populations to improve security systems and public safety. This paper proposes an extended social force model to simulate crowd evacuation behaviors in response to security threats, incorporating the influence and coordination of law enforcement personnel. This research examines evacuation strategies that balance public safety and operational efficiency by extending social force models to account for dynamic law enforcement interventions. The proposed model is validated through physics-based simulations, offering insights into effective and scalable solutions for crowd control at public events. The proposed hybrid simulation model explores the utility of integrating agent-based and physics-based approaches to enhance community resilience through improved planning and resource allocation. 
pdfProbabilistic Isochrone Analysis in Military Ground Movement: Multi-Method Synergy for Adaptive Models of the Future Alexander Roman and Oliver Rose (Universität der Bundeswehr München) Program Track: Military and National Security Applications Program Tags: Data Analytics, Python Abstract AbstractTimely and accurate prediction of adversarial unit movements is a critical capability in military operations, yet traditional methods often lack the granularity or adaptability to deal with sparse, uncertain data. This paper presents a probabilistic isochrone (PI) framework to estimate future positions of military units based on sparse reconnaissance reports. The approach constructs a continuous probability density function of movement distances and derives gradient prediction areas. Validation is conducted using real-world data from the 2022 Russian invasion of Ukraine, evaluating both the inclusion of actual future positions within the predicted rings and the root mean-squared error of our method. Results show that the method yields reliable spatial uncertainty bounds and offers interpretable predictive insights. This PI approach complements existing isochrone mapping and adversarial modeling systems and demonstrates a novel fusion of simulation, spatial analytics, and uncertainty quantification in military decision support. Future work will integrate simulation to enhance predictive fidelity. pdfModeling Pedestrian Movement in a Crowd Context with Urgency Preemption Susan K. Aros (Naval Postgraduate School) and Dale Frakes (Portland State University) Program Track: Military and National Security Applications Program Tag: Netlogo Abstract AbstractRealistic crowd modeling is essential for military and security simulation models. In this paper we address modeling of the movement of people in the types of unstructured crowds that are common in civil security situations. 
Early approaches in the literature to simulating the movement of individuals in a crowd typically treated the crowd as consisting of entities moving on a fixed grid or as particles in a fluid flow, where the movement rules were relatively simple and each member had the same goal, such as to move along a crowded sidewalk or to evacuate through an exit. This paper proposes a two-part approach for more complex pedestrian movement modeling that takes into account the cognitively determined behavioral intent of each member of the crowd to determine their own movement objective while also allowing each to temporarily react to a short-term urgent situation that may arise while pursuing their movement goal. pdf Java, Military and National Security ApplicationsMedical Evacuation and Force Readiness Simulations Session Chair: Mehdi Benhassine (Royal Military Academy) Assessing the NATO Clinical Timelines in Medical Evacuation: A Simulation with Open-Access Data Kai Meisner (Bundeswehr Medical Academy, University of the Bundeswehr Munich); Falk Stefan Pappert and Tobias Uhlig (University of the Bundeswehr Munich); Mehdi Benhassine (Royal Military Academy); and Oliver Rose (University of the Bundeswehr Munich) Program Track: Military and National Security Applications Program Tag: Java Abstract AbstractNATO allies are preparing for Large-Scale Combat Operations (LSCOs) against peer or near-peer adversaries. Although a significant increase in casualties with life-threatening injuries is expected, Western military personnel lack experience with the medical requirements of LSCOs. We propose the use of simulation to conduct necessary research, estimate the resources required, and adapt the doctrine. We therefore present a scenario for assessing NATO’s clinical timelines based on open-access data, showing that a shortage of surgical capacity is likely to occur. 
pdfEvaluating Workforce Transitions in the Royal Canadian Air Force Robert Mark Bryce (Defence Research and Development Canada); Slawomir Wesolkowski (Government of Canada); and Stephen Okazawa, Jillian Henderson, and Rene Seguin (Defence Research and Development Canada) Program Track: Military and National Security Applications Abstract AbstractAircraft fleet transitions are about more than obtaining a new platform. The transition of personnel, support, and operational responsibility makes transitions complex and fraught with risk. Here we discuss Defence Research and Development Canada (DRDC) Defence Scientists’ approach to evaluating Royal Canadian Air Force (RCAF) fleet transitions. We delve into the use of two different toolsets to model transitions. pdfDiscrete-Event Simulation of Contested Casualty Evacuation from the Frontlines in Ukraine Mehdi Benhassine (Royal Military Academy); Kai Meisner (Bundeswehr Medical Academy); John Quinn (Charles University); Marian Ivan (NATO Centre of Excellence for Military Medicine); Ruben De Rouck, Michel Debacker, and Ives Hubloue (Vrije Universiteit Brussel); and Filip Van Utterbeeck (Royal Military Academy) Program Track: Military and National Security Applications Abstract AbstractA scenario of casualty evacuations from the frontlines in Ukraine was simulated in SIMEDIS, incorporating persistent drone threats that restricted daytime evacuations. A stochastic discrete-event approach modeled casualty location and health progression. Casualties from a First-Person View drone explosion in a trench were simulated, incorporating controlled versus uncontrolled bleeding in rescue and stabilization efforts. Two evacuation strategies were compared: (A) transport to a nearby underground hospital with delays and (B) direct transport to a large hospital with potential targeting en route. Results showed that strategy A was safer for transport, but effective hemorrhage control was crucial for survival. 
Strategy A led to lower mortality than strategy B only when hemorrhage control was sufficient. Without it, both strategies resulted in similar mortality, emphasizing that blood loss was the primary cause of death in this simulation. pdf Complex Systems, Cybersecurity, Python, Military and National Security ApplicationsNon-Kinetic Simulation: Misinformation, Manipulation, and Cultural Erosion Session Chair: David Farr (University of Washington) Echo Warfare: The Strategic Appropriation of Intangible Heritage Sara Salem AlNabet (Multidimensional Warfare Training Center (MDIWTC)) Program Track: Military and National Security Applications Program Tag: Python Abstract AbstractThis paper introduces Echo Warfare, a novel non-kinetic doctrine that strategically targets intangible cultural heritage through systematic replication, appropriation, and institutional rebranding. Unlike traditional psychological or cognitive warfare, Echo Warfare aims for the intergenerational erosion of cultural identity and memory infrastructure. While current international legal frameworks inadequately address such systematic cultural appropriation, this paper advances discourse by implementing a Dynamic Bayesian Network (DBN) simulation to model how cultural anchors degrade under sustained echo feedback. The simulation reveals threshold effects, legitimacy dynamics, and population-level vulnerabilities, offering a predictive modeling tool for identifying intervention thresholds and patterns of vulnerability. The study concludes with legal, educational, and simulation-based policy recommendations to mitigate the long-term effects of Echo Warfare. pdfA Baseline Simulation of Hybrid Misinformation and Spearphishing Campaigns in Organizational Networks Jeongkeun Shin, Han Wang, L. Richard Carley, and Kathleen M. 
Carley (Carnegie Mellon University) Program Track: Military and National Security Applications Program Tags: Complex Systems, Cybersecurity Abstract AbstractThis study presents an agent-based simulation that examines how pre-attack misinformation amplifies the effectiveness of spearphishing campaigns within organizations. A virtual organization of 235 end user agents is modeled, each assigned unique human factors such as Big Five personality traits, fatigue, and job performance, derived from empirical data. Misinformation is disseminated through Facebook, where agents determine whether to believe and spread false content using regression models from prior psychological studies. When agents believe misinformation, their psychological and organizational states degrade to simulate a worst-case scenario. These changes increase susceptibility to phishing emails by impairing security-related decision-making. Informal relationship networks are constructed based on extraversion scores, and network density is varied to analyze its effect on misinformation spread. The results demonstrate that misinformation significantly amplifies organizational vulnerability by weakening individual and collective cybersecurity-relevant decision-making, emphasizing the critical need to account for human cognitive factors in future cybersecurity strategies. pdfSimulating Misinformation Vulnerabilities With Agent Personas David Thomas Farr (University of Washington), Lynnette Hui Xian Ng (Carnegie Mellon University), Stephen Prochaska (University of Washington), Iain Cruickshank (Carnegie Mellon University), and Jevin West (University of Washington) Program Track: Military and National Security Applications Abstract AbstractDisinformation campaigns can distort public perception and destabilize institutions. Understanding how different populations respond to information is crucial for designing effective interventions, yet real-world experimentation is impractical and ethically challenging. 
To address this, we develop an agent-based simulation using Large Language Models (LLMs) to model responses to misinformation. We construct agent personas spanning five professions and three mental schemas, and evaluate their reactions to news headlines. Our findings show that LLM-generated agents align closely with ground-truth labels and human predictions, supporting their use as proxies for studying information responses. We also find that mental schemas, more than professional background, influence how agents interpret misinformation. This work provides a validation of LLMs to be used as agents in an agent-based model of an information network for analyzing trust, polarization, and susceptibility to deceptive content in complex social systems. pdf Complex Systems, Data Driven, Neural Networks, Military and National Security ApplicationsSimulation for Combat Operations and Sustainment Session Chair: Michael Möbius (Airbus Defence and Space GmbH) Advancing Military Decision Support: Reinforcement Learning-Driven Simulation for Robust Operational Plan Validation Michael Möbius and Daniel Kallfass (Airbus Defence and Space) and Stefan Göricke and Thomas Manfred Doll (German Armed Forces (Bundeswehr)) Program Track: Military and National Security Applications Program Tag: Neural Networks Abstract AbstractThe growing complexity of modern warfare demands advanced AI-driven decision support for validating Operational Plans (OPLANs). This paper proposes a multi-agent reinforcement learning framework integrated into the ReLeGSim environment to rigorously test military strategies under dynamic conditions. The adoption of deep reinforcement learning enables agents to learn optimal behavior within operational plans, transforming them into “intelligent executors”. By observing these agents, one can identify vulnerabilities within plans. 
Key innovations include: (1) a hybrid approach combining action masking for strict OPLAN adherence with interleaved behavior cloning to embed military doctrine; (2) a sequential training approach where agents first learn baseline tactics before evaluating predefined plans; and (3) data farming techniques using heatmaps and key performance indicators to visualize strategic weaknesses. Experiments show hard action masking outperforms reward shaping for constraint enforcement. This work advances scalable, robust AI-driven OPLAN validation through effective domain knowledge integration. pdfWeapon Combat Effectiveness Analytics: Integrating Deep Learning and Big Data from Virtual-constructive Simulations Luis Rabelo, Larry Lowe, Won Il Jung, Marwen Elkamel, and Gene Lee (University of Central Florida) Program Track: Military and National Security Applications Program Tags: Complex Systems, Data Driven Abstract AbstractThis paper explores the application of deep learning and big data analytics to assess Weapon Combat Effectiveness (WCE) in dynamic combat scenarios. Traditional WCE models rely on simplified assumptions and limited input variables, limiting their realism. To overcome these challenges, datasets are generated from integrated Virtual-Constructive (VC) simulation frameworks, combining strengths of defense modeling, big data, and artificial intelligence. A case study features two opposing forces: a Blue Force (seven F-16 aircraft) and a Red Force (two surface-to-air missile (SAM) units) and a high-value facility. Raw simulation data is processed to extract Measures of Performance (MOPs) to train a convolutional neural network (CNN) to capture nonlinear relationships and estimate mission success probabilities. Results show the model’s resilience to data noise and its usefulness in generating decision-support tools like probability maps. 
Early results suggest that deep learning integrated with federated VC simulations can significantly enhance the fidelity and flexibility of WCE analytics. pdf Modeling Supply Chain Resiliency for Assembly Fabrication Using Discrete Event Simulation Julianna Puccio (Pacific Northwest National Laboratory) Program Track: Military and National Security Applications Abstract: Supply chain resiliency analysis has become a critical focus for organizations striving to navigate disruptions and uncertainties in a complex, global economy. For products requiring a multitude of components and suppliers, the magnitude of the impact of disruptions increases exponentially. Anticipating and addressing these potential disruptions necessitates advanced modeling approaches that provide actionable insights into how supply chain dynamics unfold under both normal and disrupted conditions.
Researchers at Pacific Northwest National Laboratory (PNNL) developed a supply chain resiliency model using Process Simulator, a discrete event simulation (DES) software, to model the fabrication of assemblies. This DES evaluates the performance of an assembly’s supply chain as a whole and its ability to keep pace with demand given the individual supply chains of each component used in the fabrication of the assemblies. pdf Complex Systems, Conceptual Modeling, Cybersecurity, Monte Carlo, Python, Military and National Security Applications Cybersecurity Simulation and AI-Driven Defense Session Chair: James Grymes (United States Military Academy) A Formal and Deployable Gaming Operation to Defend IT/OT Networks Ranjan Pal, Lillian Bluestein, Tilek Askerbekov, and Michael Siegel (Massachusetts Institute of Technology) Program Track: Military and National Security Applications Program Tags: Complex Systems, Conceptual Modeling, Cybersecurity, Monte Carlo Abstract: The cyber vulnerability terrain is largely amplified in critical infrastructure systems (CISs) that attract exploitative (nation-state) adversaries. This terrain is layered over an IT- and IoT-driven operational technology (OT) network that supports CIS software applications and underlying protocol communications. Usually, the network is too large for both cyber adversaries and defenders to control every network resource under budget constraints. Hence, both sides strategically want to target 'crown jewels' (i.e., critical network resources) as points of control in the IT/OT network. Going against traditional CIS game theory literature that idealistically (impractically) models attacker-defender interactions, we are the first to formally model real-world adversary-defender strategic interactions in CIS networks as a simultaneous non-cooperative network game with an auction contest success function (CSF) to derive the optimal defender strategy at Nash equilibria.
We compare theoretical game insights with those from large-scale Monte Carlo game simulations and propose CIS-managerial cyber defense action items. pdf Expert-in-the-Loop Systems with Cross-Domain and In-Domain Few-Shot Learning for Software Vulnerability Detection David Thomas Farr (University of Washington), Kevin Talty (United States Army), Alexandra Farr (Microsoft), John Stockdale (U.S. Army), Iain Cruickshank (Carnegie Mellon University), and Jevin West (University of Washington) Program Track: Military and National Security Applications Program Tags: Cybersecurity, Python Abstract: As cyber threats become more sophisticated, rapid and accurate vulnerability detection is essential for maintaining secure systems. This study explores the use of Large Language Models in software vulnerability assessment by simulating the identification of Python code with known Common Weakness Enumerations (CWEs), comparing zero-shot, few-shot cross-domain, and few-shot in-domain prompting strategies. Our results indicate that few-shot prompting significantly enhances classification performance, particularly when integrated with confidence-based routing strategies that improve efficiency by directing human experts to cases where model uncertainty is high.
We find that LLMs can effectively generalize across vulnerability categories with minimal examples, suggesting their potential as scalable, adaptable cybersecurity tools in simulated environments. By integrating AI-driven approaches with expert-in-the-loop (EITL) decision-making, this work highlights a pathway toward more efficient and responsive cybersecurity workflows. Our findings provide a foundation for deploying AI-assisted vulnerability detection systems that enhance resilience while reducing the burden on human analysts. pdf
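The confidence-based routing idea described in the abstract above can be illustrated with a minimal sketch: predictions whose confidence clears a threshold are accepted automatically, while low-confidence cases are deferred to a human expert. The function name, the threshold value, and the example labels are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of confidence-based routing for expert-in-the-loop
# vulnerability triage. The 0.8 threshold and all names are illustrative.

def route_predictions(predictions, threshold=0.8):
    """Split (label, confidence) pairs into auto-accepted and expert-review sets."""
    auto, expert = [], []
    for label, confidence in predictions:
        # Defer uncertain predictions to a human analyst.
        (auto if confidence >= threshold else expert).append((label, confidence))
    return auto, expert

preds = [("CWE-79", 0.95), ("CWE-89", 0.62), ("benign", 0.88)]
auto, expert = route_predictions(preds)
# auto   -> [("CWE-79", 0.95), ("benign", 0.88)]
# expert -> [("CWE-89", 0.62)]
```

In practice the threshold would be tuned on a validation set to balance analyst workload against the cost of accepting a wrong classification.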
Track Coordinator - Modeling Methodology: Rodrigo Castro (Universidad de Buenos Aires, ICC-CONICET), Andrea D'Ambrogio (University of Roma Tor Vergata), Gabriel Wainer (Carleton University) C++, Complex Systems, DEVS, Rare Events, Modeling Methodology DEVS Session Chair: Hessam Sarjoughian (Arizona State University) DEVS Models for Arctic Major Maritime Disasters Hazel Tura Griffith and Gabriel A. Wainer (Carleton University) Program Track: Modeling Methodology Program Tags: C++, DEVS, Rare Events Abstract: Modern modelling and simulation techniques allow us to safely test the policies used to mitigate disasters. We show how the DEVS formalism can be used to ease the modelling process by exploiting its modularity. We show how a policymaker’s existing models of any type can be recreated with DEVS so they may be reused in any new models, decreasing the number of new models that need to be made. We recreate a sequential decision model of an Arctic major maritime disaster developed by the Canadian government as a DEVS model to demonstrate the method. The case study shows how DEVS allows policymakers to create models for studying emergency policies with greater ease. This work shows a method that policymakers can apply to existing models of emergency scenarios, how they can benefit from creating equivalent DEVS models, and how to exploit the beneficial properties of the DEVS formalism. pdf Asymmetric Cell-DEVS Wildfire Spread Using GIS Data Mark Murphy, Gabriel Wainer, Jaan Soulier, and Alec Tratnik (Carleton University) Program Track: Modeling Methodology Abstract: Wildfires are extremely dangerous and destructive. In order to protect populations and infrastructure,
government officials and firefighters must best decide how to expend their limited resources. Wildfire simulation aids these decision makers by predicting the spread of fire. One method to simulate wildfires is using Cellular Automata. In this paper we present a hybrid model combining a cellular wildfire spread model using asymmetric Cell-DEVS models, the Behave library for calculating the rate of spread of the fire (based on difference equations), and a Geographical Information System (GIS). The GIS information comes from publicly available maps, and the combination of techniques can improve the accuracy of wildfire spread predictions and allows multiple scenarios to be simulated in a timely manner. pdf Generative Statecharts-Driven PDEVS Behavior Modeling Vamsi Krishna Vasa and Hessam S. Sarjoughian (Arizona State University) and Edward J. Yellig (Intel Corporation) Program Track: Modeling Methodology Program Tags: Complex Systems, DEVS Abstract: Behavioral models of component-based dynamical systems are integral to building useful simulations. Toward this goal, approaches enabled by Large Language Models (LLMs) have been proposed and developed to generate grammar-based models for Discrete Event System Specification (DEVS). This paper introduces PDEVS-LLM, an agentic framework to assist in developing Parallel DEVS (PDEVS) models. It proposes using LLMs with statecharts to generate behaviors for parallel atomic models. Enabled with PDEVS concepts, plausible facts are extracted from the whole description of a system. PDEVS-LLM is equipped with grammars for the PDEVS statecharts and hierarchical coupled model. LLM agents assist modelers in (re-)generating atomic models with conversation histories. Examples are developed to demonstrate the capabilities and limitations of LLMs for generative PDEVS models. pdf Complex Systems, Conceptual Modeling, Neural Networks, Python, Supply Chain, Validation, Modeling Methodology Simulation and AI Session Chair: Gabriel Wainer (Carleton University) Model Validation and LLM-based Model Enhancement for Analyzing Networked Anagram Experiments Hao He, Xueying Liu, and Xinwei Deng (Virginia Tech) Program Track: Modeling Methodology Program Tags: Neural Networks, Validation Abstract: Agent-based simulations for networked anagram games, often taking advantage of experimental data, are useful tools to investigate collaborative behaviors. To confidently incorporate the statistical analysis from the experimental data into the ABS, it is crucial to conduct sufficient validation of the underlying statistical models.
In this work, we propose a systematic approach to evaluate the validity of statistical methods for modeling players’ action sequences in the networked anagram experiments. The proposed method can appropriately quantify the effect and validity of expert-defined covariates for modeling the players’ action sequence data. We further develop a Large Language Model (LLM)-guided method to augment the covariate set, employing iterative text summarization to overcome token limits. The performance of the proposed methods is evaluated under different metrics tailored for imbalanced data in networked anagram experiments. The results highlight the potential of LLM-driven feature discovery to refine the underlying statistical models used in agent-based simulations. pdf LLM Assisted Value Stream Mapping Micha Jan Aron Selak, Dirk Krechel, and Adrian Ulges (RheinMain University of Applied Sciences) and Sven Spieckermann, Niklas Stoehr, and Andreas Loehr (SimPlan AG) Program Track: Modeling Methodology Program Tags: Neural Networks, Supply Chain Abstract: The correct design of digital value stream models is an intricate task, which can be challenging especially for untrained or inexperienced users. We address the question of whether large language models can be adapted to "understand" a value stream’s structure and act as modeling assistants, which could support users with repairing errors and adding or configuring process steps in order to create valid value stream maps that can be simulated. Specifically, we propose a domain-specific multi-task training process, in which an instruction-tuned large language model is fine-tuned to yield specific information on its input value stream or to fix scripted modeling errors. The resulting model – which we coin Llama-VaStNet – can manipulate value stream structures given user requests in natural language. We demonstrate experimentally that Llama-VaStNet outperforms its domain-agnostic vanilla counterpart, i.e.
it is 19% more likely to produce correct individual manipulations. pdf A Simulation-Enabled Framework for Mission Engineering Problem Definition: Integrating AI-Driven Knowledge Retrieval with Human-Centered Design Rafi Soule and Barry C. E (Old Dominion University) Program Track: Modeling Methodology Program Tags: Complex Systems, Conceptual Modeling, Python Abstract: Mission Engineering (ME) requires coordination of multiple systems and stakeholders, but often suffers from unclear problem definitions, fragmented knowledge, and limited engagement. This paper proposes a hybrid methodology integrating Retrieval-Augmented Generation (RAG), Human-Centered Design (HCD), and Participatory Design (PD) within a Model-Based Systems Engineering (MBSE) framework. The approach generates context-rich, stakeholder-aligned mission problem statements, as demonstrated in the Spectrum Lab case study, ultimately improving mission effectiveness and stakeholder collaboration. pdf AnyLogic, Complex Systems, Conceptual Modeling, Monte Carlo, System Dynamics, Variance Reduction, Modeling Methodology Modeling Paradigms Session Chair: Andrea D'Ambrogio (University of Roma Tor Vergata) A Novel System Dynamics Approach to DC Microgrid Power Flow Analysis Jose González de Durana (University of the Basque Country) and Luis Rabelo and Marwen Elkamel (University of Central Florida) Program Track: Modeling Methodology Program Tags: AnyLogic, Complex Systems, Conceptual Modeling, System Dynamics Abstract: This paper employs System Dynamics (SD) to model and analyze DC power distribution systems, focusing on methodological development and using microgrids as case studies. The approach follows a bottom-up methodology, starting with the fundamentals of DC systems and building toward more complex configurations. We coin this approach “Power Dynamics,” which uses stocks and flows to represent electrical components such as resistors, batteries, and power converters.
SD offers a time-based, feedback-driven approach that captures component behaviors and system-wide interactions. This framework provides computational efficiency, adaptability, and visualization, enabling the integration of control logic and qualitative decision-making elements. Three case studies of microgrids powered by renewable energy demonstrate the framework’s effectiveness in simulating energy distribution, load balancing, and dynamic power flow. The results highlight SD’s potential as a valuable modeling tool for studying modern energy systems, supporting the design of flexible and resilient infrastructures. pdf Fast Monte Carlo Irene Aldridge (Cornell University) Program Track: Modeling Methodology Program Tags: Monte Carlo, Variance Reduction Abstract: This paper proposes an eigenvalue-based small-sample approximation of the celebrated Markov Chain Monte Carlo that delivers an invariant steady-state distribution consistent with traditional Monte Carlo methods. The proposed eigenvalue-based methodology reduces the number of paths required for Monte Carlo from as many as 1,000,000 to as few as 10 (depending on the simulation time horizon T), and delivers comparable, distributionally robust results, as measured by the Wasserstein distance. The proposed methodology also produces a significant variance reduction in the steady-state distribution. pdf Validating Simulated Agents With Pharmaceutical Supply Chain Game Data Souri Sasanfar, Omid Mohaddesi, Noah Chicoine, Jacqueline Griffin, and Casper Harteveld (Northeastern University) Program Track: Modeling Methodology Abstract: Human decision-making under uncertainty poses a significant challenge in modeling disrupted pharmaceutical supply chains. This study introduces a behaviorally grounded simulation framework that systematically integrates empirical data from participatory game experiments using an advanced methodological pipeline.
Specifically, we combine Principal Component Analysis (PCA) for dimensionality reduction, Longest Common Prefix (LCP) clustering for behavioral sequence segmentation, and Hidden Markov Models (HMMs) for dynamic behavioral archetype extraction. These integrated methods identify three archetypes: Hoarders, Reactors, and Followers, which we embed into agent-based simulations to assess impacts under varying disruption and information-sharing scenarios. Agent realism is rigorously validated using Kolmogorov-Smirnov tests that compare simulated and empirical behavioral data, demonstrating strong alignment in general but revealing notable discrepancies among Hoarders under disruption-duration conditions. The findings highlight the methodological necessity of capturing dynamic behavioral heterogeneity, guiding future research on alternative clustering methods, multiphase clustering, and dynamic behavioral transition modeling for enhanced supply chain resilience. pdf C++, Distributed, Parallel, Modeling Methodology PDES Session Chair: Sebastiaan Meijer (KTH Royal Institute of Technology) DDA-PDES: A Data-Dependence Analysis Parallel Discrete-Event Simulation Framework for Event-Level Parallelization of General-Purpose DES Models Erik J. Jensen; James F. Leathrum, Jr.; Christopher J. Lynch; and Katherine Smith (Old Dominion University) and Ross Gore (Old Dominion University, Center for Secure and Intelligent Critical Systems) Program Track: Modeling Methodology Program Tags: C++, Parallel Abstract: Utilizing data-dependence analysis (DDA) in parallel discrete-event simulation (PDES) to find event-level parallelism, we present the DDA-PDES framework as an alternative to spatial-decomposition (SD) PDES. DDA-PDES uses a pre-computed Independence Time Limit (ITL) table to efficiently identify events in the pending-event set that are ready for execution, in a shared-memory-parallel simulation engine.
Experiments with AMD, Qualcomm, and Intel platforms using several packet-routing network models and a PHOLD benchmark model demonstrate speedup of up to 8.82x and parallel efficiency of up to 0.91. In contrast with DDA-PDES, experiments with similar network models in ROSS demonstrate that SD-PDES cannot speed up the packet-routing models without degradation to routing efficacy. Our results suggest DDA-PDES is an effective method for parallelizing discrete-event simulation models that are computationally intensive, and may be superior to traditional PDES methods for spatially-decomposed models with challenging communication requirements. pdf The Needle in the One-Billion Event-Haystack: A Simple Method For Checking Reverse Handlers in Parallel Discrete Event Simulation Elkin Cruz-Camacho and Christopher D. Carothers (Rensselaer Polytechnic Institute) Program Track: Modeling Methodology Abstract: Parallel Discrete Event Simulation (PDES) enables model developers to run massive simulations across millions of computer nodes. Unfortunately, writing PDES models is hard, especially when implementing reverse handlers. Reverse handlers address the issue of saving the entire state of Logical Processes (LPs), a necessary step for optimistic simulation. They restore the state of LPs through reverse computation, which reduces memory and computation for most common models. This forces model designers to implement two tightly coupled functions: an event handler and a reverse handler. A small bug in a reverse handler can trigger a cascade of errors, ultimately leading to the simulation’s failure. We have implemented a strategy to check reverse handlers one event at a time. In a mature PDES model, we identified reverse handling bugs that would have required processing approximately 1 billion events to manifest, which is nearly impossible to detect using traditional debugging techniques.
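The per-event checking strategy described in the abstract above can be sketched in a few lines: snapshot the LP state, apply the forward handler, apply the reverse handler, and verify the original state is restored. This is an illustrative sketch (the authors' tooling is in a C-based PDES framework, not Python), and the toy handlers below are invented for demonstration.

```python
# Illustrative per-event reverse-handler check: forward then reverse should
# be the identity on the LP state. All names here are hypothetical.
import copy

def check_reverse(state, event, forward, reverse):
    """Return True if reverse(event) exactly undoes forward(event) on state."""
    before = copy.deepcopy(state)  # snapshot for comparison only
    forward(state, event)
    reverse(state, event)
    return state == before

# Toy LP: the forward handler adds to a counter; the buggy reverse handler
# forgets to undo it, so the check flags the event immediately.
def fwd(s, e): s["count"] += e
def good_rev(s, e): s["count"] -= e
def bad_rev(s, e): pass

assert check_reverse({"count": 0}, 5, fwd, good_rev) is True
assert check_reverse({"count": 0}, 5, fwd, bad_rev) is False
```

Running such a check on every event during a sequential test run surfaces reverse-handler bugs at the first offending event rather than after a rollback cascade.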
pdf Optimizing Event Timestamp Processing in Time Warp Gaurav Shinde (STERIS, Inc); Sounak Gupta (Oracle, Inc); and Philip A. Wilsey (University of Cincinnati) Program Track: Modeling Methodology Program Tags: Distributed, Parallel Abstract: warped2 is a general-purpose discrete event simulation kernel that contains a robust event time comparison mechanism to support a broad range of modeling domains. The warped2 kernel can be configured for sequential, parallel, or distributed execution. The parallel or distributed versions implement the Time Warp mechanism (with its rollback and relaxed causality) such that a total order on events can be maintained. To maintain a total order, warped2 has an event ordering mechanism that contains up to 10 comparison operations. While not all comparisons require evaluation of all 10 relations, the overall cost of time comparisons in a warped2 simulation can still consume approximately 15-20% of the total runtime. This work examines the runtime costs of time comparisons in a parallel configuration of the warped2 simulation kernel. Optimizations to the time comparison mechanism are explored and the performance impacts of each are reported. pdf Conceptual Modeling, Java, Metamodeling, Ranking and Selection, Validation, Modeling Methodology Simulation Experiments Session Chair: Susan Aros (Naval Postgraduate School) Dialectic Models for Documenting and Conducting Simulation Studies: Exploring Feasibility Steffen Zschaler (King's College London), Pia Wilsdorf (University of Rostock), Thomas Godfrey (Aerogility Ltd), and Adelinde M. Uhrmacher (University of Rostock) Program Track: Modeling Methodology Program Tags: Metamodeling, Validation Abstract: Validation and documentation of rationale are central to simulation studies. Most current approaches focus only on individual simulation artifacts---most typically simulation models---and their validity rather than their contribution to the overall simulation study.
Approaches that aim to validate simulation studies as a whole either impose structured processes with the implicit assumption that this will ensure validity, or they rely on capturing provenance and rationale, most commonly in natural language, following accepted documentation guidelines. Inspired by dialectic approaches for developing mathematical proofs, we explore the feasibility of capturing validity and rationale information as a study unfolds through agent dialogs that also generate the overall simulation-study argument. We introduce a formal framework, an initial catalog of possible interactions, and a proof-of-concept tool to capture such information about a simulation study. We illustrate the ideas in the context of a cell biological simulation study. pdf Goal-oriented Generation of Simulation Experiments Anja Wolpers, Pia Wilsdorf, Fiete Haack, and Adelinde M. Uhrmacher (University of Rostock) Program Track: Modeling Methodology Program Tags: Conceptual Modeling, Java, Validation Abstract: Automatically generating and executing simulation experiments promises to make running simulation studies more efficient, less error-prone, and easier to document and replicate. However, during experiment generation, background knowledge is required regarding which experiments using which inputs and outputs are useful to the modeler. Therefore, we conducted an interview study to identify what types of experiments modelers perform during simulation studies. From the interview results, we defined four general goals for simulation experiments: exploration, confirmation, answering the research question, and presentation. Based on the goals, we outline and demonstrate an approach for automatically generating experiments by utilizing an explicit and thoroughly detailed conceptual model. pdf Enhanced Upper Confidence Bound Procedure for Large-Scale Ranking and Selection Song Huang, Guangxin Jiang, and Chenxi Li (Harbin Institute of Technology) and Ying Zhong (University of Electronic Science and Technology of China) Program Track: Modeling Methodology Program Tag: Ranking and Selection Abstract: With the rapid advancement of computing technology, there has been growing interest in effectively solving large-scale ranking and selection (R&S) problems. In this paper, we propose a new large-scale fixed-budget R&S procedure, namely the enhanced upper confidence bound (EUCB) procedure. The EUCB procedure incorporates variance information into the dynamic allocation of simulation budgets. It selects the alternative with the largest upper confidence bound. We prove that the EUCB procedure has sample optimality; that is, to achieve an asymptotically nonzero probability of correct selection (PCS), the total sample size required grows linearly with the number of alternatives. We demonstrate the effectiveness of the EUCB procedure in numerical examples. In addition to achieving sample optimality under the PCS criterion, our numerical experiments also show that the EUCB procedure maintains sample optimality under the expected opportunity cost (EOC) criterion. pdf DOE, Monte Carlo, Python, Modeling Methodology Domain-Specific Techniques Session Chair: Elkin Cruz-Camacho (Rensselaer Polytechnic Institute) PySIRTEM: An Efficient Modular Simulation Platform For The Analysis Of Pandemic Scenarios Preetom Kumar Biswas, Giulia Pedrielli, and K. Selçuk Candan (Arizona State University) Program Track: Modeling Methodology Program Tags: DOE, Monte Carlo, Python Abstract: Conventional population-based ODE models struggle with increased levels of resolution, since incorporating many states exponentially increases computational costs and demands robust calibration of numerous hyperparameters. PySIRTEM is a spatiotemporal SEIR-based epidemic simulation platform that provides high-resolution analysis of viral disease progression and mitigation. Based on the authors’ earlier MATLAB simulator SIRTEM, PySIRTEM’s modular design reflects key health processes, including infection, testing, immunity, and hospitalization, enabling flexible manipulation of transition rates. Unlike SIRTEM, PySIRTEM uses a Sequential Monte Carlo (SMC) particle filter to dynamically learn epidemiological parameters using historical COVID-19 data from several U.S. states. The improved accuracy (by orders of magnitude) makes PySIRTEM ideal for informed decision-making by detecting outbreaks and fluctuations. We further demonstrate PySIRTEM’s usability by performing a factorial analysis to assess the impact of different hyperparameter configurations on the predicted epidemic dynamics. Finally, we analyze containment scenarios with varying trends, showcasing PySIRTEM’s adaptability and effectiveness. pdf iHeap: Generalized Heap Module with Case Studies in Marketing and Emergency Room Services Aniruddha Mukherjee and Vernon Rego (Purdue University) Program Track: Modeling Methodology Program Tag: Python Abstract: We introduce a novel iheap module, a flexible Python library for heap operations. The iheap module introduces a generalized comparator function, making it more flexible and general than the commonly used heapq module. We demonstrate that the iheap module achieves parity in time complexity and memory usage with established standard heap modules in Python. Furthermore, the iheap module provides advanced methods and customization options unavailable in its counterparts, which enable the user to implement heap operations with greater flexibility and control. We demonstrate the iheap module through two case studies. The first case study focuses on the efficient allocation of funds across marketing campaigns with uncertain returns. The second case study focuses on patient triaging and scheduling in hospital emergency rooms. The iheap module provides a powerful and easy-to-use tool for heap operations commonly used in simulation studies. pdf A Review of Key Risk Modeling Techniques for Oil and Gas Offshore Decommissioning Projects MD Kashfin, Oumaima Larif, Suresh Khator, and Weihang Zhu (University of Houston) Program Track: Modeling Methodology Abstract: Neglecting the dangers of decommissioning offshore gas and oil facilities can lead to significant environmental destruction, safety risks, legal issues, and other costs that are likely to increase as these outdated structures deteriorate. This paper provides a comprehensive review of risk modeling techniques employed in offshore oil and gas decommissioning projects.
Through a systematic analysis of relevant papers published between 2015 and 2025, the key risk modeling approaches were identified: multi-criteria decision-making frameworks, Bayesian network applications, and sensor/digital-twin/physics-based modeling techniques. The paper evaluates the advantages, limitations, and application scope of each type of method under different decommissioning scenarios. Despite significant advances in risk modeling, persistent gaps exist due to the uncertainty and complexity of the projects. This review serves as a foundation for understanding the current state of risk modeling in offshore decommissioning while identifying critical areas for future research development. pdf Cyber-Physical Systems, DEVS, Modeling Methodology Modeling Methods Session Chair: Irene Aldridge (Cornell University, AbleMarkets) A Method for FMI and DEVS Co-simulation Ritvik Joshi (Carleton University, Blackberry QNX); James Nutaro (Oak Ridge National Lab); Gabriel Wainer (Carleton University); and Bernard Zeigler and Doohwan Kim (RTSync Corp) Program Track: Modeling Methodology Program Tags: Cyber-Physical Systems, DEVS Abstract: The need for standardized exchange of dynamic models led to the Functional Mockup Interface (FMI), which facilitates model exchange and co-simulation across multiple tools. Integration of this standard with a modeling and simulation formalism enhances interoperability and provides opportunities for collaboration. This research presents an approach for the integration of FMI and the Discrete Event System Specification (DEVS). DEVS provides the modularity required for seamlessly integrating the shared model. We propose a framework for exporting and co-simulating DEVS models as well as for importing and co-simulating continuous-time models using the FMI standard. We present a case study that shows the use of this framework to simulate the steering system of an Unmanned Ground Vehicle (UGV).
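The modularity that the FMI/DEVS abstract above relies on comes from DEVS atomic models, each defined by a state, a time-advance function, an internal transition, and an output function. The following is a minimal illustrative sketch of that structure, not code from the paper; the two-phase generator and the simulation loop are invented for demonstration.

```python
# Minimal sketch of a DEVS atomic model (illustrative, not the paper's framework):
# ta() gives the time to the next internal event, output() is emitted at that
# instant, and delta_int() is the internal transition that follows.

class OnOffGenerator:
    def __init__(self):
        self.phase = "off"
    def ta(self):
        # Stay "off" for 1 time unit, "on" for 2.
        return 1.0 if self.phase == "off" else 2.0
    def delta_int(self):
        self.phase = "on" if self.phase == "off" else "off"
    def output(self):
        return self.phase

def run(model, until):
    """Drive internal events up to the given virtual time, recording outputs."""
    t, trace = 0.0, []
    while t + model.ta() <= until:
        t += model.ta()
        trace.append((t, model.output()))  # output precedes the transition
        model.delta_int()
    return trace

# run(OnOffGenerator(), 6.0) -> [(1.0, "off"), (3.0, "on"), (4.0, "off"), (6.0, "on")]
```

Because each atomic model exposes only this small interface, it can be wrapped behind another co-simulation interface (such as an FMU) without changing its internals, which is the kind of modularity the paper exploits.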
pdf Content Considerations for Simulation Conceptual Model Documentation Susan Aros (Naval Postgraduate School) Program Track: Modeling Methodology Abstract: A simulation conceptual model (SCM) has many uses, such as supporting clarification of the model design across stakeholders, guiding the computer model development, and supporting verification and validation efforts; however, proper documentation of the SCM is critical. SCM literature covers approaches and methods for developing SCMs, and for documenting SCMs, but literature directly addressing content considerations for SCM documentation is sparse. In this paper we address the determination of what content should be included in SCM documentation to maximize its utility. We first synthesize information across the literature, creating a superset of types of SCM content, and also discuss different audiences and their purposes for SCM documentation. We then provide a first-pass analysis of what types of content are most important to include in SCM documentation for different audiences and purposes, offering insights for determining which types of content to include in any given model's documentation. pdf Positioning Game Science for Gaming and Simulation: A Reflection on the Research Philosophical Underpinnings Sebastiaan Meijer (KTH Royal Institute of Technology), Heide Lukosch (University of Canterbury), Marieke de Wijse-van Heeswijk (Radboud University), and Jan Klabbers (KPMC) Program Track: Modeling Methodology Abstract: This article provides an integrated epistemological and ontological frame of reference for scholars and practitioners in the field of gaming and simulation. This frame lays the foundation for advancing the field by developing a definition of game science. The article defines what games are (artifacts consisting of rules, roles, and resources), how they contribute to knowledge creation (‘knowing’ as doing), and how they connect the analytical and design sciences.
Gaming and simulation are means to understand and design complex realities and are therefore an important means for actors to deal with the complexities of our world. However, we observe a stagnation of scientific rigor in our discipline in a fast-moving world. To enable future and emerging gaming and simulation researchers to practice game science in a rigorous way, we have formulated statements and used an existing science framework to define game science as a discipline. pdf
Project Management and Construction Track Coordinator - Project Management and Construction: Jing Du (University of Florida), Joseph Louis (Oregon State University) Neural Networks, Python, Project Management and Construction AI and Simulation in the Built Environment Session Chair: Joseph Louis (Oregon State University) Simulation-Driven Reinforcement Learning vs. Search-Based Optimization for Real-Time Control in Construction Manufacturing Ian Flood and Madison E. Hill (University of Florida) Program Track: Project Management and Construction Abstract: This paper presents a comparative study of AI-based decision agents for real-time control in construction manufacturing, a field traditionally ill-suited to mass production due to high customization and uncertainty. Three approaches are developed and evaluated: an experience-based agent trained through reinforcement learning (EDA), a search-based agent using local stochastic sampling (SDA), and a hybrid combining both strategies (HybridDA). The goal is to minimize component delivery delays. A simulation model of a real precast concrete factory helps generate training patterns and assess production performance. Results show that the EDA significantly outperforms the SDA, despite relying on generalized experience rather than task-specific knowledge. The HybridDA provides modest gains, especially as search effort increases. These findings highlight the unexpected effectiveness of experience-based agents in dynamic manufacturing environments and the potential of hybrid approaches. Future work will explore alternative learning strategies, extend the scope of manufacturing decisions, and evaluate latency performance across the agents.
pdfHuman AI Metacognitive Alignment in Agent-Based Drone Simulation: A New Analysis Methodology for Belief Updating Best Contributed Theoretical Paper - Finalist Bowen Sun, Jiahao Wu, and Jing Du (University of Florida) Program Track: Project Management and Construction Abstract AbstractAutonomous drones play a critical role in dynamic environments, with effective decision making hinging on timely and accurate belief updating. However, the mechanisms underlying AI belief updating, especially under uncertainty and partial observability, remain unclear. This study introduces a novel framework combining Bayesian inference, evidence accumulation theory, and Dynamic Time Warping (DTW) to analyze the AI belief updating process. Experimental validation conducted using simulated urban search and rescue (USAR) scenarios reveals that drone decisions robustly prioritize environmental factors like wind conditions. Moreover, the duration of belief updates inversely relates to environmental change intensity, showcasing human-like adaptive cognition. A dual-threshold system captures belief updates independent of behavior changes, differentiating internal cognitive shifts from observable behavioral actions and thereby enhancing alignment with human cognitive patterns. These findings offer foundational insights contributing toward transparent, adaptive, and human-aligned AI. pdfGenVision: Enhancing Construction Safety Monitoring with Synthetic Image Generation Jiuyi Xu (Colorado School of Mines), Meida Chen (USC Institute for Creative Technologies), and Yangming Shi (Colorado School of Mines) Program Track: Project Management and Construction Program Tags: Neural Networks, Python Abstract AbstractThe development of object detection models for construction safety is often limited by the availability of high-quality, annotated datasets.
This study explores the use of synthetic images generated by DALL·E 3 to supplement or partially replace real data in training YOLOv8 for detecting construction-related objects. We compare three dataset configurations: real-only, synthetic-only, and a mixed set of real and synthetic images. Experimental results show that the mixed dataset consistently outperforms the other two across all evaluation metrics, including precision, recall, IoU, and mAP@0.5. Notably, detection performance for occluded or ambiguous objects such as safety helmets and vests improves with synthetic data augmentation. While the synthetic-only model shows reasonable accuracy, domain differences limit its effectiveness when used alone. These findings suggest that high-quality synthetic data can reduce reliance on real-world data and enhance model generalization, offering a scalable approach for improving construction site safety monitoring systems. pdf Complex Systems, Data Driven, Python, Resiliency, Project Management and ConstructionSimulation for Infrastructure Resilience Session Chair: Ian Flood (University of Florida) Discrete Event Simulation for Assessing the Impact of Bus Fleet Electrification on Service Reliability Best Contributed Applied Paper - Finalist Minjie Xia, Wenying Ji, and Jie Xu (George Mason University) Program Track: Project Management and Construction Program Tags: Complex Systems, Data Driven, Python, Resiliency Abstract AbstractThis paper aims to derive a simulation model to evaluate the impact of bus fleet electrification on service reliability. At its core, the model features a micro discrete event simulation (DES) of an urban bus network, integrating a route-level bus operation module and a stop-level passenger travel behavior module. Key reliability indicators—bus headway deviation ratio, excess passenger waiting time, and abandonment rate—are computed to assess how varying levels of electrification influence service reliability. 
A case study of route 35 operated by DASH in Alexandria, VA, USA is conducted to demonstrate the applicability and interpretability of the developed DES model. The results reveal trade-offs between bus fleet electrification and service reliability, highlighting the role of operational constraints and characteristics of electric buses (EBs). This research provides transit agencies with a data-driven tool for evaluating electrification strategies while maintaining reliable and passenger-centered service. pdfThe Influence of Information Source and Modality on Flood Risk Perception and Preparedness Subashree Dinesh, Armita Dabiri, and Amir Behzadan (University of Colorado) Program Track: Project Management and Construction Abstract AbstractFlood risk communication influences how the public perceives hazards and motivates preparedness actions, like taking preventive measures, purchasing insurance, and making informed property decisions. Prior research suggests that both the format and source of information can influence how people interpret risk. This study investigates how flood map types and sources influence individuals’ risk perception and decision-making. Using a randomized controlled trial (N = 796), participants were assigned to view one of several flood risk representations sourced from governmental agencies, a nonprofit, and crowdsourced images of past floods, or plain geographical maps. Participants then self-reported their risk perceptions and behavioral intentions. This study compares visual formats and sources to reveal how communication strategies influence public understanding and preparedness, guiding more effective flood risk messaging.
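The reliability indicators named in the bus-electrification abstract above can be made concrete with a short sketch. The definitions below are illustrative assumptions, not the paper's exact formulas: headway deviation ratio as the mean absolute deviation of observed headways from the scheduled headway (normalized by the schedule), and expected passenger wait from the standard random-arrival formula E[W] = E[H]/2 * (1 + CV(H)^2).

```python
from statistics import mean, pstdev

def headway_deviation_ratio(arrivals, scheduled_headway):
    """Mean absolute deviation of observed headways from the schedule,
    normalized by the scheduled headway (illustrative definition)."""
    headways = [b - a for a, b in zip(arrivals, arrivals[1:])]
    return mean(abs(h - scheduled_headway) for h in headways) / scheduled_headway

def expected_wait(arrivals):
    """Mean passenger wait under random arrivals: E[H]/2 * (1 + CV^2)."""
    headways = [b - a for a, b in zip(arrivals, arrivals[1:])]
    m = mean(headways)
    cv = pstdev(headways) / m
    return m / 2 * (1 + cv ** 2)

# Perfectly regular 10-minute service vs. a bunched pattern.
regular = [0, 10, 20, 30, 40]
bunched = [0, 2, 20, 22, 40]

print(headway_deviation_ratio(regular, 10))  # 0.0
print(expected_wait(regular))                # 5.0 minutes
print(headway_deviation_ratio(bunched, 10))  # bunching raises the deviation ratio
print(expected_wait(bunched))                # and the mean passenger wait
```

Bus bunching leaves the average headway unchanged but inflates its variance, which is why the CV term, not the mean, drives the excess waiting time.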
pdf Data Driven, Distributed, Validation, Project Management and ConstructionSimulation for Construction Operations Session Chair: Yangming Shi (Colorado School of Mines) ConStrobe - Construction Operations Simulation for Time and Resource Based Evaluations Joseph Louis (Oregon State University) Program Track: Project Management and Construction Program Tag: Distributed Abstract AbstractThis paper introduces ConStrobe – Construction Operations Simulation for Time and Resource Based Evaluations – simulation software that builds upon knowledge in construction field operations simulation by providing the capabilities of running High Level Architecture (HLA)-compliant distributed simulations and being amenable to automation from external programs written in the Python language for two-way communication with external data sources. These features are provided to overcome some of the major limitations of existing construction operations simulation tools that have hindered their widespread adoption by industry. The framework of this software is explained along with a sample demonstration case to provide users with an overview of its capabilities and an understanding of how it works. It is anticipated that the novel capabilities of ConStrobe can reduce the time and effort required to create simulations to enable process analysis for decision-making under uncertainty for complex operations in the construction and built operations domain. pdfOptimizing Precast Concrete Production: a Discrete-event Simulation Approach with Simphony Julie Munoz, Mohamad Itani, Mohammad Elahi, Anas Itani, and Yasser Mohamed (University of Alberta) Program Track: Project Management and Construction Program Tags: Data Driven, Validation Abstract AbstractPrecast concrete manufacturers increasingly face throughput bottlenecks as market demand rises and curing-area capacity reaches its limit.
This paper develops a validated discrete-event simulation (DES) model of a Canadian precast panel plant using the Simphony platform. Field observations, time studies, and staff interviews supply task durations, resource data, and variability distributions. After verification and validation against production logs, two improvement scenarios are tested: (1) doubling curing beds and (2) halving curing time with steam curing. Scenario A reduces total cycle time by 26%, while Scenario B achieves a 24% reduction and lowers curing-bed utilization by 5%. Both scenarios cut crane waiting and queue lengths, demonstrating that relieving the curing bottleneck drives system-wide gains. The study confirms DES as an effective, low-risk decision-support tool for off-site construction, offering plant managers clear, data-driven guidance for investment planning and lean implementation. pdf
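The two curing-bottleneck scenarios in the precast-plant abstract above can be mimicked with a toy discrete-event model. This is a minimal sketch, not the authors' Simphony model: all numbers (hourly arrivals, 12-hour curing, 4 beds) are hypothetical, and the curing area is modeled as a FIFO pool of beds tracked in a min-heap.

```python
import heapq

def simulate_curing(arrivals, cure_time, n_beds):
    """FIFO curing-area model: each panel waits for the earliest free bed,
    then occupies it for cure_time. Returns per-panel completion times."""
    beds = [0.0] * n_beds          # next-free time of each bed
    heapq.heapify(beds)
    done = []
    for t in arrivals:
        free = heapq.heappop(beds)
        finish = max(t, free) + cure_time
        heapq.heappush(beds, finish)
        done.append(finish)
    return done

# Panels arriving every hour; 12 h curing; 4 beds (hypothetical numbers).
arrivals = [float(i) for i in range(20)]
base = simulate_curing(arrivals, cure_time=12.0, n_beds=4)
more_beds = simulate_curing(arrivals, cure_time=12.0, n_beds=8)  # cf. Scenario A
steam = simulate_curing(arrivals, cure_time=6.0, n_beds=4)       # cf. Scenario B

def avg_cycle(done):
    """Average time from arrival to completed curing."""
    return sum(f - a for f, a in zip(done, arrivals)) / len(done)

print(avg_cycle(base), avg_cycle(more_beds), avg_cycle(steam))
```

Even this crude model reproduces the qualitative finding: either adding beds or shortening cure time relieves the bottleneck and cuts average cycle time relative to the baseline.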
Reliability Modeling and Simulation Track Coordinator - Reliability Modeling and Simulation: Sanja Lazarova-Molnar (University of Southern Denmark, Karlsruhe Institute of Technology), Xueping Li (University of Tennessee), Femi Omitaomu (Oak Ridge National Laboratory) Reliability Modeling and SimulationIntelligent and Sustainable Autonomous Systems Session Chair: Jing Du (University of Florida) Reliability Assessment of Convolutional Autoencoder-Based Wind Modeling for Autonomous Drone Training Jiahao Wu, Hengxu You, and Jing Du (University of Florida) Program Track: Reliability Modeling and Simulation Abstract AbstractComputational Fluid Dynamics (CFD) is a widely used method for wind modeling in autonomous drone training, yet its computational expense limits real-time applications. This study explores the reliability of a convolutional autoencoder-based approach as a replacement for CFD in wind field generation. A convolutional autoencoder is trained on CFD-generated wind data to learn its spatial and dynamic distribution patterns, enabling the generation of realistic wind fields with significantly lower computational costs. The generated wind fields are then used to train reinforcement learning (RL)-based drones, with their policies evaluated in real CFD environments. Results demonstrate that the convolutional autoencoder accurately replicates CFD wind patterns, supports stable drone navigation, and facilitates seamless transferability of RL-trained policies from autoencoder-generated wind environments to CFD-based environments. These findings highlight the convolutional autoencoder’s potential as a computationally efficient and reliable alternative to traditional CFD in autonomous drone simulations. pdfOperational Simulation of Multi-functional Charging Station for Sustainable Transportation Jeremiah Gbadegoye (University of Tennessee, Knoxville); Yang Chen and Olufemi A. 
Omitaomu (Oak Ridge National Lab.); and Xueping Li (University of Tennessee, Knoxville) Program Track: Reliability Modeling and Simulation Abstract AbstractAs transportation systems move toward electrification and decarbonization, multifunctional charging stations (MFCS) are emerging as key infrastructure for electric vehicles (EVs) and hydrogen fuel cell vehicles (HFCVs). This paper presents a simulation model of an MFCS that integrates solar photovoltaic (PV), wind power, battery storage, hydrogen (H2) production, dual-pressure H2 storage, fuel cells, and dynamic grid interactions. The model simulates daily operations using 5-minute resolution data to capture real-time variability in renewable energy (RE), demand, and electricity prices. A flexible dispatch algorithm dynamically allocates energy for EV charging, H2 production, storage, and grid transactions while respecting system constraints. Results show that the MFCS effectively prioritizes RE usage, minimizes waste, meets diverse energy demands, and achieves net operational profit. The model serves as a valuable decision-support tool for designing and optimizing integrated clean energy hubs for zero-emission transportation. pdf Data Driven, DOE, Python, SIMUL8, Reliability Modeling and SimulationSimulation and Digital Twins for Sustainable Infrastructure Systems Session Chair: Omar Mostafa (Karlsruhe Institute of Technology (KIT), Institute of Applied Informatics and Formal Description Methods (AIFB)) Automating Traffic Microsimulation from Synchro UTDF to SUMO Xiangyong Luo (ORNL); Yiran Zhang (University of Washington); Guanhao Xu, Wan Li, and Chieh Ross Wang (Oak Ridge National Laboratory); and Xuesong Simon Zhou (Arizona State University) Program Track: Reliability Modeling and Simulation Program Tags: DOE, Python Abstract AbstractModern transportation research relies on seamlessly integrating traffic signal data with robust network representation and simulation tools.
This study presents utdf2gmns, an open-source Python tool that automates conversion of the Universal Traffic Data Format, including network representation, signalized intersections, and turning volumes, into the General Modeling Network Specification (GMNS) Standard. The resulting GMNS-compliant network can be converted for microsimulation in SUMO. By automatically extracting intersection control parameters and aligning them with GMNS conventions, utdf2gmns minimizes manual preprocessing and data loss. utdf2gmns also integrates with the Sigma-X engine to extract and visualize key traffic control metrics, such as phasing diagrams, turning volumes, volume-to-capacity ratios, and control delays. This streamlined workflow enables efficient scenario testing, accurate model building, and consistent data management. Validated through case studies, utdf2gmns reliably models complex urban corridors, promoting reproducibility and standardization. Documentation is available on GitHub and PyPI, supporting easy integration and community engagement. pdfData Requirements for Reliability-Oriented Digital Twins of Energy Systems: A Case Study Analysis Omar Mostafa (Karlsruhe Institute of Technology (KIT)) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology (KIT), University of Southern Denmark (SDU)) Program Track: Reliability Modeling and Simulation Program Tag: Data Driven Abstract AbstractEnsuring reliability of energy systems is critical for maintaining a secure and adequate energy supply, especially as the integration of renewable energy increases systems’ complexity and variability. Digital Twins offer a promising approach for data-driven reliability assessment and decision support in energy systems. Digital Twins provide decision support by dynamically modeling and analyzing system reliability using real-time data to create a digital replica of the physical counterpart.
As modern energy systems generate vast amounts of data, it is essential to precisely define the data required for enabling Digital Twins for their reliability assessment. In this paper, we systematically investigate the data requirements for reliability-oriented Digital Twins for energy systems and propose a structured categorization of these requirements. To illustrate our findings, we present a case study demonstrating the link between data and model extraction for enhancing system reliability. pdfSimulating the dynamic interaction between fleet performance and maintenance processes based on Remaining Useful Life Christoph Werner (Minitab) Program Track: Reliability Modeling and Simulation Program Tags: Python, SIMUL8 Abstract AbstractFleet planning is often challenging; in particular, the dynamic interaction with maintenance entails various uncertainties. Predicting the arrivals into maintenance processes requires an understanding of fleet performance over time while, in turn, delays of repairs severely impact fleet performance and deterioration.
This feedback loop has been neglected so far, which is why we present a novel framework using a ‘rolling window’ machine learning model to predict the inputs into a discrete event simulation (DES) of repair activities based on remaining useful life (RUL). Our ‘fleet tracker’ then uses the DES outputs to simulate fleet performance together with environmental and mission-based factors which form the inputs for predicting RUL. Finally, explainable ML helps decision-makers construct relevant ‘what if’ scenarios. As a motivating example, we consider helicopters in search and rescue missions and their maintenance. As a key result, we compare two scenarios of repair turnaround times and their impact on RUL decline. pdf
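The core quantity in the fleet-maintenance abstract above, remaining useful life estimated over a rolling window, can be illustrated with a deliberately simple stand-in for the paper's ML model: a least-squares slope fitted to the most recent health readings, extrapolated to the failure threshold. All names and numbers here are hypothetical.

```python
def rolling_rul(health, window, failure_level=0.0):
    """Estimate remaining useful life from the last `window` health readings:
    fit a least-squares degradation slope, then extrapolate the time until
    health reaches failure_level. Illustrative stand-in for a rolling-window
    ML predictor, not the paper's actual model."""
    ys = health[-window:]
    xs = list(range(len(ys)))
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    if slope >= 0:
        return float("inf")  # no degradation observed within the window
    return (health[-1] - failure_level) / -slope

# Health index dropping 2 units per flight hour (hypothetical helicopter data).
readings = [100 - 2 * t for t in range(10)]   # latest reading: 82
print(rolling_rul(readings, window=5))        # 41.0 hours until failure level 0
```

In the framework described above, estimates like this would feed the arrival stream of the repair-shop DES, while the simulated repair delays in turn reshape future health trajectories.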
Scientific AI and Applications Track Coordinator - Scientific AI and Applications: Rafael Mayo-García (CIEMAT), Esteban Mocskos (CSC-CONICET, University of Buenos Aires (AR)) Data Analytics, DOE, Metamodeling, Output Analysis, R, Analysis Methodology, Scientific AI and ApplicationsAnalysis Methods and Algorithms Session Chair: Rafael Mayo-García (CIEMAT) Exploiting Functional Data for Combat Simulation Sensitivity Analysis Lisa Joanne Blumson, Charlie Peter, and Andrew Gill (Defence Science and Technology Group) Program Track: Analysis Methodology Program Tags: Data Analytics, DOE, Metamodeling, Output Analysis, R Abstract AbstractComputationally expensive combat simulations are often used to inform military decision-making, and sensitivity analyses enable the quantification of the effect of military capabilities or tactics on combat mission effectiveness. The sensitivity analysis is performed using a meta-model approximating the simulation's input-output relationship, and the output data that most combat meta-models are fitted to correspond to end-of-run mission effectiveness measures. However, during execution, a simulation records a large array of temporal data. This paper seeks to examine whether functional combat meta-models fitted to this temporal data, and the subsequent sensitivity analysis, could provide a richer characterization of the effect of military capabilities or tactics. An approach from Functional Data Analysis will be used to illustrate the potential benefits on a case study involving a closed-loop, stochastic land combat simulation. pdfPreconditioning a Restarted GMRES Solver Using the Randomized SVD José A. Moríñigo, Andrés Bustos, and Rafael Mayo-García (CIEMAT) Program Track: Scientific AI and Applications Abstract AbstractThe performance of a CPU-only implementation of the restarted GMRES algorithm with direct randomized-SVD-based preconditioning has been analyzed.
The method has been tested on a set of sparse and dense matrices exhibiting varying spectral properties and compared to ILU(0)-based preconditioning. This comparison aims to assess the advantages and drawbacks of both approaches. The trade-off between iteration-to-solution and time-to-solution metrics is discussed, demonstrating that the proposed method achieves an improved convergence rate in terms of iterations. Additionally, the method’s competitiveness with respect to both metrics is discussed within the context of several relevant scenarios, particularly those where GMRES-based simulation techniques are applicable. pdf
Simulation and Artificial Intelligence Track Coordinator - Simulation and Artificial Intelligence: Edward Y. Hua (MITRE Corporation), Dehghani Mohammad (Northeastern University), Yijie Peng (Peking University) C++, Complex Systems, Conceptual Modeling, Data Analytics, Output Analysis, System Dynamics, Simulation and Artificial IntelligenceGenerative AI and Simulation I Session Chair: Negar Sadeghi (Northeastern University) LLM Prompt Engineering for Performance in Simulation Software Development: A Metric-Based Approach to Using LLMs James F. Leathrum, Abigail S. Berardi, and Yuzhong Shen (Old Dominion University) Program Track: Simulation and Artificial Intelligence Program Tag: C++ Abstract AbstractSimulation software development is crucial for advancing digital engineering, scientific research, and innovation. The emergence of Generative AI, especially Large Language Models (LLMs), introduces possibilities for automating code generation, documentation, testing, and refactoring. Prompt engineering has become a key method of translating human intent into automated software output. However, integrating LLMs into software workflows requires reliable evaluation methods. This paper proposes a metric-based framework that uses software metrics to assess and guide LLM integration. Treating prompts as first-class artifacts, the framework supports improvements in code quality, maintainability, and developer efficiency. Performance, measured by execution time relative to a known high-performance codebase, is used as the initial metric of study. The work focuses on the impact of assigning roles in prompts and refining prompt engineering strategies to generate high-performance software through structured preambles. The work provides the foundation for LLM-generated software starting from a well-defined simulation model. pdfToward Automating System Dynamics Modeling: Evaluating LLMs in the Transition from Narratives to Formal Structures Jhon G.
Botello (Virginia Modeling, Analysis, and Simulation Center) and Brian Llinas, Jose Padilla, and Erika Frydenlund (Old Dominion University) Program Track: Simulation and Artificial Intelligence Program Tags: Complex Systems, Conceptual Modeling, Output Analysis, System Dynamics Abstract AbstractTransitioning from narratives to formal system dynamics (SD) models is a complex task that involves identifying variables, their interconnections, feedback loops, and the dynamic behaviors they exhibit. This paper investigates how large language models (LLMs), specifically GPT-4o, can support this process by bridging narratives and formal SD structures. We compare zero-shot prompting with chain-of-thought (CoT) iterations using three case studies based on well-known system archetypes. We evaluate the LLM’s ability to identify the systemic structures, variables, causal links, polarities, and feedback loop patterns. We present both quantitative and qualitative assessments of the results. Our study demonstrates the potential of guided reasoning to improve the transition from narratives to system archetypes. We also discuss the challenges of automating SD modeling, particularly in scaling to more complex systems, and propose future directions for advancing toward automated modeling and simulation in SD assisted by AI. pdfPerformance of LLMs on Stochastic Modeling Operations Research Problems: From Theory to Practice Yuhang Wu, Akshit Kumar, Tianyi Peng, and Assaf Zeevi (Columbia University) Program Track: Simulation and Artificial Intelligence Program Tag: Data Analytics Abstract AbstractLarge language models (LLMs) have exhibited capabilities comparable to those of human experts in various fields. However, their modeling abilities—the process of converting real-life problems (or their verbal descriptions) into sensible mathematical models—have been underexplored. 
In this work, we take the first step to evaluate LLMs’ abilities to solve stochastic modeling problems, a model class at the core of Operations Research (OR) and decision-making more broadly. We manually procure a representative set of graduate-level homework and doctoral qualification-exam problems and test LLMs’ abilities to solve them. We further leverage SimOpt, an open-source library of simulation-optimization problems and solvers, to investigate LLMs’ abilities to make real-world decisions. Our results show that, though a nontrivial amount of work is still needed to reliably automate the stochastic modeling pipeline in reality, state-of-the-art LLMs demonstrate proficiency on par with human experts in both classroom and practical settings. pdf Simulation and Artificial IntelligencePanel: Look to the Future! Simulation 2050 and Beyond Session Chair: Dehghani Mohammad (Northeastern University) Look to the Future! Simulation 2050 and Beyond Hans Ehm (Infineon Technologies AG), Dennis Pegden (Simio LLC), Hessam Sarjoughian (Arizona State University), Andreas Tolk and Edward Hua (The MITRE Corporation), and Mohammad Dehghanimohammadabadi (Northeastern University) Program Track: Simulation and Artificial Intelligence Abstract AbstractThe advancement of Artificial Intelligence has accelerated the transformation of simulation from a tool for analysis into a dynamic partner in decision-making and system design. As AI systems become more capable of learning, reasoning, and adapting, simulation is evolving into an intelligent, autonomous, and predictive framework for exploring complex futures. This paper brings together future-oriented perspectives from simulation scientists and AI experts to discuss the next 25 years of innovation, collaboration, and disruption. 
pdf Data Driven, Monte Carlo, Neural Networks, Python, Simulation and Artificial IntelligenceMulti-Agent and Optimization Methods Session Chair: Yusef Ahsini (Universitat Politècnica de València, Bechained Artificial Intelligence Technologies) A Data-Driven Discretized CS:GO Simulation Environment to Facilitate Strategic Multi-Agent Planning Research Yunzhe Wang (University of Southern California, USC Institute for Creative Technologies); Volkan Ustun (USC Institute for Creative Technologies); and Chris McGroarty (U.S. Army Combat Capabilities Development Command - Soldier Center) Program Track: Simulation and Artificial Intelligence Program Tag: Data Driven Abstract AbstractModern simulation environments for complex multi-agent interactions must balance high-fidelity detail with computational efficiency. We present DECOY, a novel multi-agent simulator that abstracts strategic, long-horizon planning in 3D terrains into high-level discretized simulation while preserving low-level environmental fidelity. Using Counter-Strike: Global Offensive (CS:GO) as a testbed, our framework accurately simulates gameplay using only movement decisions as tactical positioning—without explicitly modeling low-level mechanics such as aiming and shooting. Central to our approach is a waypoint system that simplifies and discretizes continuous states and actions, paired with neural predictive and generative models trained on real CS:GO tournament data to reconstruct event outcomes. Extensive evaluations show that replays generated from human data in DECOY closely match those observed in the original game. Our publicly available simulation environment provides a valuable tool for advancing research in strategic multi-agent planning and behavior generation. 
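The waypoint system at the heart of the DECOY abstract above, discretizing continuous positions and trajectories into a small set of map nodes, can be sketched in a few lines. This is a toy 2D analogue under assumed names and coordinates; DECOY itself operates on 3D CS:GO terrain with learned models.

```python
import math

def nearest_waypoint(pos, waypoints):
    """Discretize a continuous 2D position to the index of the closest waypoint."""
    return min(range(len(waypoints)),
               key=lambda i: math.dist(pos, waypoints[i]))

def to_waypoint_track(trajectory, waypoints):
    """Collapse a continuous trajectory into a deduplicated waypoint sequence,
    i.e. the high-level movement decisions a discretized simulator would replay."""
    track = []
    for pos in trajectory:
        w = nearest_waypoint(pos, waypoints)
        if not track or track[-1] != w:
            track.append(w)
    return track

waypoints = [(0, 0), (10, 0), (10, 10)]           # hypothetical map-graph nodes
trajectory = [(1, 1), (4, 0.5), (9, 1), (10, 8)]  # sampled continuous positions
print(to_waypoint_track(trajectory, waypoints))   # [0, 1, 2]
```

The payoff of this kind of abstraction is that long-horizon planning happens over a handful of waypoint indices rather than continuous coordinates, which is what keeps the high-level simulation cheap.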
pdfAgentic Simheuristic: Integrating generative AI and Simheuristic for Team Orienteering Problem Mohammad Peyman and Yusef Ahsini (Universitat Politècnica de València, Bechained Artificial Intelligence Technologies) Program Track: Simulation and Artificial Intelligence Abstract AbstractAddressing complex stochastic optimization problems often requires hybridization of search and evaluation methods. Simheuristics combine metaheuristics and simulation but typically rely on static control logic. Meanwhile, large language models (LLMs) offer advanced reasoning but lack robust mechanisms for constrained optimization. We propose the agentic simheuristic framework, a novel architecture that leverages an LLM as a high-level coordinator for simheuristic components. Applied to the team orienteering problem (TOP) under uncertainty, the framework employs an LLM to manage an exploratory agent for broad solution search and an exploitative agent for intensive refinement. Both agents integrate Monte Carlo simulation to evaluate solutions under uncertainty. The LLM guides the process by selecting promising and diverse exploratory solutions to seed refinement, enabling intelligent coordination within simheuristics. We present the framework architecture and provide initial empirical results on TOP benchmark instances, illustrating operational feasibility as a proof of concept and highlighting potential for explainable, AI-driven optimization. pdfGNN-Heatmap Augmented Monte Carlo Tree Search for Cloud Workflow Scheduling Best Contributed Applied Paper - Finalist Dingyu Zhou, Jiaqi Huang, Yirui Zhang, and Wai Kin (Victor) Chan (Tsinghua University) Program Track: Simulation and Artificial Intelligence Program Tags: Monte Carlo, Neural Networks, Python Abstract AbstractThis paper addresses the NP-hard cloud workflow scheduling problem by proposing a novel method that integrates Graph Neural Networks with Monte Carlo Tree Search (MCTS). 
Cloud workflows, represented as Directed Acyclic Graphs, present significant scheduling challenges due to complex task dependencies and heterogeneous resource requirements. Our method leverages Anisotropic Graph Neural Networks to extract structural features from workflows and create a heatmap that guides the MCTS process during both the selection and simulation phases. Extensive experiments on workflows ranging from 30 to 110 tasks demonstrate that our method outperforms rule-based algorithms, classic MCTS, and other learning-based approaches; more notably, it achieves near-optimal solutions with only a 2.56% gap from exact solutions and demonstrates exceptional scalability to completely unseen workflow sizes. This synergistic integration of neural network patterns with Monte Carlo simulation-based search not only advances cloud workflow scheduling but also offers valuable insights for simulation-based optimization across diverse domains. pdf Parallel, Python, Simulation and Artificial IntelligenceDiscrete Event Simulation and AI Session Chair: Amir Abdollahi (Northeastern University) Machine Learning Enhanced Discrete Event Simulation of a Surgical Center Chen He (Southwest University); Marianne Sarazin, François Dufour, and Marie Paule Bourbon (Clinique Mutualiste Saint-Etienne); and Xiaolan Xie (Mines Saint-Etienne) Program Track: Simulation and Artificial Intelligence Abstract AbstractIn discrete event simulation (DES) of surgical centers, activity durations are inherently stochastic and are typically modeled with standard probability distributions. In reality, however, actual procedure times may not conform to standard distributions, depending instead on the individualized characteristics of patients. Accounting for such covariate effects in DES is challenging, making personalized care process simulation impractical. This shortcoming can compromise model fidelity and constrain the utility of DES for precise service management.
Machine learning (ML) techniques offer a promising solution by incorporating patient-specific features into duration estimates. In this study, we introduce an ML-driven simulation framework that integrates advanced predictive algorithms with a DES model of a surgical center. We benchmark our approach against conventional DES methodologies that rely solely on predefined probability distributions to represent time variability. Results show that our approach can significantly reduce the randomness of service durations and lead to a significant improvement in simulation accuracy. pdfDES-Gymnax: Fast Discrete Event Simulator in JAX Yun Hua and Jun Luo (Shanghai Jiao Tong University) and Xiangfeng Wang (East China Normal University) Program Track: Simulation and Artificial Intelligence Abstract AbstractIn this work, we introduce DES-Gymnax, a novel high-performance discrete-event simulator implemented in JAX. By leveraging the just-in-time compilation, automatic vectorization, and GPU acceleration capabilities of JAX, DES-Gymnax achieves 10x to 100x performance improvements over traditional Python-based discrete-event simulators like Salabim. The proposed DES-Gymnax features a Gym-like API that facilitates seamless integration with reinforcement learning algorithms, addressing a critical gap between simulation engines and AI techniques. DES-Gymnax is validated on three benchmark models, i.e., an M/M/1 queue, a multi-server model, and a tandem queue model. Experimental results demonstrate that DES-Gymnax maintains simulation accuracy while significantly reducing execution time, enabling efficient large-scale sampling crucial for reinforcement learning applications in operations research areas. The open-source code is available in the DES-Gymnax repository (Yun, Jun, and Xiangfeng 2025). pdfScalable, Rule-Based, Parallel Discrete Event Based Agentic AI Simulations Atanu Barai, Stephan Eidenbenz, and Nandakishore Santhi (Los Alamos National Laboratory) Program Track: Simulation and Artificial Intelligence Program Tags: Parallel, Python Abstract AbstractWe introduce a novel parallel discrete event simulation (PDES)-based method to couple multiple AI and non-AI agents in a rule-based manner with dynamic constraints while ensuring correctness of output. Our coupling mechanism enables the agents to work in a cooperative environment towards a common goal while many sub-tasks run in parallel. AI agents trained on vast amounts of human data naturally model complex human behaviors and emotions easily; this is in contrast to conventional agents, which need to be burdensomely complex to capture aspects of human behavior. Distributing smaller AI agents on a large heterogeneous CPU/GPU cluster enables extremely scalable simulation tapping into the collective complexity of individual smaller models, while circumventing local memory bottlenecks for model parameters. We illustrate the potential of our approach with examples from traffic simulation and robot gathering, where we find our
coupling of AI/non-AI agents improves overall fidelity. pdf AnyLogic, Neural Networks, Python, Resiliency, Simulation and Artificial IntelligenceAI in Manufacturing I Session Chair: Maryam SAADI (Airbus Group, IMT Ales) Optimizing Production Planning and Control: Reward Function Design in Reinforcement Learning Marc Wegmann, Benedikt Gruenhag, Michael Zaeh, and Christina Reuter (Technical University of Munich) Program Track: Simulation and Artificial Intelligence Program Tag: Resiliency Abstract AbstractProduction planning and control (PPC) is challenged by the complex and volatile environment manufacturers face. One promising approach in PPC is the application of Reinforcement Learning (RL). In RL, an intelligent agent is trained in a simulation environment based on its experiences. The agent's behavior is shaped by defining a reward function that provides positive feedback if the agent performs well and negative feedback if it does not. Accordingly, the design of the reward function determines the impact RL can have. This article deals with the challenge of how to design a suitable reward function. To do so, 8 design principles and 21 design parameters were identified based on a structured literature review. The principles and parameters were utilized to systematically derive reward function alternatives for a given PPC task. These alternatives were applied to a use case in rough production scheduling, a subtask of PPC. pdfAI-based Assembly Line Optimization in Aeronautics: a Surrogate and Genetic Algorithm Approach Maryam SAADI (Airbus Group, IMT Ales); Vincent Bernier (Airbus Group); and Gregory Zacharewicz and Nicolas Daclin (IMT) Program Track: Simulation and Artificial Intelligence Program Tags: AnyLogic, Neural Networks, Python Abstract AbstractIndustrial configuration planning requires testing many setups, which is time-consuming when each scenario must be evaluated through detailed simulation. 
To accelerate this process, we train a Multi-Layer Perceptron (MLP) to predict key performance indicators (KPIs) quickly, using it as a surrogate model. However, classical regression metrics such as Mean Squared Error (MSE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE) do not reflect prediction quality in all situations. To solve this issue, we introduce a classification-based evaluation strategy. We define acceptable prediction margins based on business constraints, then convert the regression output into discrete classes. We assess model performance using precision and recall. This approach reveals where the model makes critical errors and helps decision-makers at Airbus Helicopters trust the AI’s predictions. pdfGraph-Enhanced Deep Reinforcement Learning for Multi-Objective Unrelated Parallel Machine Scheduling Bulent Soykan, Sean Mondesire, Ghaith Rabadi, and Grace Bochenek (University of Central Florida) Program Track: Simulation and Artificial Intelligence Abstract AbstractThe Unrelated Parallel Machine Scheduling Problem (UPMSP) with release dates, setups, and eligibility constraints presents a significant multi-objective challenge. Traditional methods struggle to balance minimizing Total Weighted Tardiness (TWT) and Total Setup Time (TST). This paper proposes a Deep Reinforcement Learning framework using Proximal Policy Optimization (PPO) and a Graph Neural Network (GNN). The GNN effectively represents the complex state of jobs, machines, and setups, allowing the PPO agent to learn a direct scheduling policy. Guided by a multi-objective reward function, the agent simultaneously minimizes TWT and TST. Experimental results on benchmark instances demonstrate that our PPO-GNN agent significantly outperforms a standard dispatching rule and a metaheuristic, achieving a superior trade-off between both objectives. This provides a robust and scalable solution for complex manufacturing scheduling. 
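The classification-based evaluation strategy described in the Airbus surrogate abstract above (discretize KPI values into business-defined bands, then score the regressor with per-class precision and recall) can be sketched as follows; the band edges, values, and function names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def to_bands(values, edges):
    """Discretize KPI values into classes using business-defined
    band edges (the edges here are illustrative)."""
    return np.digitize(values, edges)

def precision_recall(true_cls, pred_cls, cls):
    """Per-class precision/recall over the discretized predictions."""
    tp = np.sum((pred_cls == cls) & (true_cls == cls))
    fp = np.sum((pred_cls == cls) & (true_cls != cls))
    fn = np.sum((pred_cls != cls) & (true_cls == cls))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# KPI bands: < 10 (good), 10-20 (acceptable), > 20 (poor)
edges = [10.0, 20.0]
y_true = np.array([5.0, 12.0, 15.0, 25.0, 30.0])
y_pred = np.array([6.0, 14.0, 22.0, 24.0, 9.0])   # surrogate outputs
true_cls, pred_cls = to_bands(y_true, edges), to_bands(y_pred, edges)
p, r = precision_recall(true_cls, pred_cls, cls=1)
print(p, r)  # precision 1.0, recall 0.5 for the middle band
```

Unlike MSE, this view surfaces exactly which predictions cross a band boundary that matters to the business, even when the raw error is small.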
pdf Complex Systems, Neural Networks, Open Source, Python, Sampling, Simulation and Artificial IntelligenceDeep Learning and Simulation Session Chair: Sahil Belsare (Walmart, Inc. USA; Northeastern University) Sample Efficient Exploration Policy for Asynchronous Q-Learning Xinbo Shi (Peking University), Jing Dong (Columbia University), and Yijie Peng (Peking University) Program Track: Simulation and Artificial Intelligence Program Tag: Sampling Abstract AbstractThis paper investigates sample-efficient exploration policies for asynchronous Q-learning from the perspective of uncertainty quantification. Although algorithms like ε-greedy can balance exploration and exploitation, their performance heavily depends on hyperparameter selection, and a systematic approach to designing exploration policies remains an open question. Inspired by contextual Ranking and Selection problems, we focus on optimizing the probability of correctly selecting optimal actions (PCS) rather than merely estimating Q-values accurately. We establish a novel central limit theorem for asynchronous Q-iterations, enabling the development of two strategies: (1) an optimization-based policy that seeks an optimal computing budget allocation and (2) a parameter-based policy that selects from a parametrized family of policies. Specifically, we propose minimizing an asymptotic proxy of Q-value uncertainty with regularization. Experimental results on benchmark problems, including River Swim and Machine Replacement, demonstrate that the proposed policies can effectively identify sample-efficient exploration strategies. 
pdfMulti-agent Market Simulation for Deep Reinforcement Learning With High-Frequency Historical Order Streams David Byrd (Bowdoin College) Program Track: Simulation and Artificial Intelligence Program Tags: Complex Systems, Open Source, Python Abstract AbstractAs artificial intelligence rapidly co-evolves with complex modern systems, new simulation frameworks are needed to explore the potential impacts. In this article, I introduce a novel open source multi-agent financial market simulation powered by raw historical order streams at nanosecond resolution. The simulation is particularly targeted at deep reinforcement learning, but also includes momentum, noise, order book imbalance, and value traders, any number and type of which may simultaneously trade against one another and the historical order stream within the limit order books of the simulated exchange. The simulation includes variable message latency, automatic agent computation delays sampled in real time, and built-in tools for performance logging, statistical analysis, and plotting. I present the simulation features and design, demonstrate the framework on a multipart DeepRL use case with continuous actions and observations, and discuss potential future work. pdfTask-Aware Multi-Expert Architectures for Lifelong Deep Learning Jianyu Wang and JACOB NEAN-HUA SHEIKH (George Mason University), Cat P. Le (Duke University), and Hoda Bidkhori (George Mason University) Program Track: Simulation and Artificial Intelligence Program Tag: Neural Networks Abstract AbstractLifelong deep learning aims to enable neural networks to continuously learn across tasks while retaining previously acquired knowledge. This paper introduces an algorithm, Task-Aware Multi-Expert (TAME), which facilitates incremental and collaborative learning by leveraging task similarity to guide expert model selection and knowledge transfer. 
TAME retains a pool of pretrained neural networks and selectively activates the most relevant expert based on task similarity metrics. A shared dense layer then utilizes the appropriate expert's knowledge to generate a prediction. To mitigate catastrophic forgetting, TAME incorporates a limited-capacity replay buffer that stores representative samples and their embeddings from each task. Furthermore, an attention mechanism is integrated to dynamically prioritize the most relevant stored knowledge for each new task. The proposed algorithm is both flexible and adaptable across diverse learning scenarios. Experimental results on classification tasks derived from CIFAR-100 demonstrate that TAME significantly enhances classification performance while preserving knowledge across evolving task sequences. pdf Data Driven, DEVS, Input Modeling, Python, Simulation and Artificial IntelligenceGenerative AI and Simulation II Session Chair: Hessam Sarjoughian (Arizona State University) AURORA: Enhancing Synthetic Population Realism Through RAG and Salience-Aware Opinion Modeling Rebecca Marigliano and Kathleen Carley (Carnegie Mellon University) Program Track: Simulation and Artificial Intelligence Program Tags: Data Driven, Input Modeling, Python Abstract AbstractSimulating realistic populations for strategic influence and social-cyber modeling requires agents that are demographically grounded, emotionally expressive, and contextually coherent. Existing agent-based models often fail to capture the psychological and ideological diversity found in real-world populations. This paper introduces AURORA, a Retrieval-Augmented Generation (RAG)-enhanced framework that leverages large language models (LLMs), semantic vector search, and salience-aware topic modeling to construct synthetic communities and personas. We compare two opinion modeling strategies and evaluate three LLMs—gemini-2.0-flash, deepseek-chat, and gpt-4o-mini—in generating emotionally and ideologically varied agents. 
Results show that community-guided strategies improve meso-level opinion realism, and LLM selection significantly affects persona traits and emotions. These findings demonstrate that principled LLM integration and salience-aware modeling can enhance the realism and strategic utility of synthetic populations for simulating narrative diffusion, belief change, and social response in complex information environments. pdfTemporal Diffusion Models From Parallel DEVS Models: A Generative-AI Approach for Semiconductor Fabrication Manufacturing Systems Vamsi Krishna Pendyala and Hessam S. Sarjoughian (Arizona State University) and Edward J. Yellig (Intel Corporation) Program Track: Simulation and Artificial Intelligence Program Tag: DEVS Abstract AbstractGenerative-AI models offer powerful capabilities for learning complex dynamics and generating high-fidelity synthetic data. In this work, we propose Conditional Temporal Diffusion (CTD) models for generating wafer fabrication time-series trajectories conditioned on static factory configurations. The model is trained using data from a Parallel Discrete Event System Specification (PDEVS)-based MiniFab benchmark model, which simulates different steps of a semiconductor manufacturing process and captures the wafer processing dynamics (e.g., throughput & turnaround time). These simulations incorporate multiscale, realistic behaviors such as preventive maintenance and wafer dispatching under both uniform and sinusoidal generation patterns. CTD models are conditioned on static covariates, including wafer composition, lot sizes, repair type, and wafer generator mode of the factory. Experimental evaluations demonstrate that the synthetic outputs achieve high fidelity with average errors below 15% while significantly reducing data generation time. This highlights CTD’s effectiveness as a scalable and efficient surrogate for complex manufacturing simulations. 
pdfAn Empirical Study of Generative Models as Input Models for Simulation Zhou Miao (The Hong Kong Polytechnic University) and Zhiyuan Huang and Zhaolin Hu (Tongji University) Program Track: Simulation and Artificial Intelligence Program Tags: Input Modeling, Python Abstract AbstractInput modeling is pivotal for generating realistic data that mirrors real-world variables for simulation, yet traditional parametric methods often fail to capture complex dependencies. This study investigates the efficacy of modern generative models—such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), Normalizing Flows, and Diffusion Models—in addressing these challenges. Through systematic experiments on synthetic and real-world datasets, we evaluate their performance using metrics like Wasserstein distance and quantile loss. Our findings reveal that VAEs and Denoising Diffusion Probabilistic Models (DDPMs) consistently outperform other models, particularly in capturing nonlinear relationships, while GAN-based approaches exhibit instability. These results provide practical insights for practitioners, highlighting models that deliver reliable performance without extensive customization, and outline promising directions for future research in simulation input modeling. pdf Cybersecurity, Data Driven, Monte Carlo, Simulation and Artificial IntelligenceAI Applications Session Chair: Ranjan Pal (Massachusetts Institute of Technology) Guiding Program Synthesis With Monte Carlo Tree Search Gongbo Zhang and Yijie Peng (Peking University) Program Track: Simulation and Artificial Intelligence Abstract AbstractGenetic Programming (GP) is a powerful evolutionary computation technique for automatic program synthesis, but it suffers from inefficient search and solution bloat. This paper proposes using Monte Carlo Tree Search (MCTS) to directly construct programs by formulating synthesis as a sequential decision problem modeled as a Markov Decision Process. 
We tailor the MCTS algorithm to the GP domain and demonstrate its theoretical consistency. Empirical evaluation on symbolic regression benchmarks shows that our approach consistently outperforms traditional GP, discovering higher-quality solutions with more compact structural complexity and suggesting that MCTS is a promising alternative for automatic program generation. pdfAI on Small and Noisy Data is Ineffective For ICS Cyber Risk Management Best Contributed Theoretical Paper - Finalist Yaphet Lemiesa, Ranjan Pal, and Michael Siegel (Massachusetts Institute of Technology) Program Track: Simulation and Artificial Intelligence Program Tags: Cybersecurity, Data Driven, Monte Carlo Abstract AbstractModern industrial control systems (ICSs) are increasingly relying upon IoT and CPS technology to improve cost-effective service performance at scale. Consequently, the cyber vulnerability terrain is largely amplified in ICSs. Unfortunately, the historical lack of (a) sufficient, non-noisy ICS cyber incident data, and (b) intelligent operational business processes to collect and analyze available ICS cyber incident data, demands the attention of the Bayesian AI community to develop cyber risk management (CRM) tools to address these challenges. In this paper we show with sufficient Monte Carlo simulation evidence that Bayesian AI on noisy (and small) ICS cyber incident data is ineffective for CRM. More specifically, we show via a novel graphical sensitivity analysis methodology that even small amounts of statistical noise in cyber incident data are sufficient to reduce ICS intrusion/anomaly detection performance by a significant percentage. Hence, ICS management processes should strive to collect sufficient non-noisy cyber incident data. 
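The claim in the ICS abstract above, that noise in cyber incident data degrades detection performance, can be reproduced in miniature with a Monte Carlo experiment; the 1-D Gaussian features, the underreporting noise model, and the nearest-class-mean detector are all illustrative assumptions of mine, not the paper's methodology:

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_rate(underreport, n=20000):
    """One Monte Carlo trial: train a nearest-class-mean detector on
    incident data in which a fraction of intrusions were mislabeled
    as normal, then measure its true-positive rate on clean data.
    Entirely synthetic; illustrates the effect, not the paper's model."""
    y = rng.integers(0, 2, n)              # 0 = normal, 1 = intrusion
    x = rng.normal(2.0 * y, 1.0)           # 1-D sensor feature
    # label noise: underreported intrusions are logged as normal
    y_noisy = np.where((y == 1) & (rng.random(n) < underreport), 0, y)
    mu0, mu1 = x[y_noisy == 0].mean(), x[y_noisy == 1].mean()
    xt = rng.normal(2.0, 1.0, n)           # clean test intrusions
    detected = np.abs(xt - mu1) < np.abs(xt - mu0)
    return detected.mean()

for rate in (0.0, 0.4, 0.8):
    print(rate, round(detection_rate(rate), 3))
```

Running it shows the true-positive rate falling as the underreporting rate grows, because mislabeled intrusions drag the learned "normal" profile toward the intrusion distribution.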
pdfSustaining Capital-boosted Cyber Reinsurance Markets Using Cat Bonds Ranjan Pal (Massachusetts Institute of Technology), Bodhibrata Nag (Indian Institute of Management Calcutta), and Sander Zeijlemaker and Michael Siegel (Massachusetts Institute of Technology) Program Track: Simulation and Artificial Intelligence Abstract AbstractCyber insurance (CI) markets improve the cybersecurity of companies (enterprises) in theory. This theory is increasingly being validated in practice in multiple enterprises that buy cyber insurance. However, the supply-demand gap in the CI market is huge and stretches over a trillion dollars. The primary reason is that cyber reinsurance companies do not have sufficient capital to serve their CI company clients through profitable cyber-risk portfolio diversification. This capital problem is, in turn, rooted in the fundamental challenge of enterprise cyber posture information asymmetry that has plagued the CI industry since its inception. A radical capital-boosting mechanism proposed by researchers and deployed within the industry in the last two years is an insurance-linked security (ILS) product such as a catastrophic (CAT) bond. In this paper, we present arguments on why, how, and when CAT bonds help sustain capital-boosted
reinsurance markets complemented by innovative AI/ML-driven inside-out cyber posture scanning solutions. pdf C++, Complex Systems, Conceptual Modeling, Cyber-Physical Systems, Data Driven, Neural Networks, Simulation and Artificial IntelligenceAI in Manufacturing II Session Chair: Michael Brown (University of Central Florida) Simulation-based Design of the LENR System John Richard Clymer (John R Clymer & Associates), Amar Vakil (Search Data Laboratory), and Keryn Johnson (3Quantum Tech. Limited and IMU LLC) Program Track: Simulation and Artificial Intelligence Program Tags: C++, Complex Systems Abstract AbstractThe Information Universe (IU) communicates with the Material Universe (MU) to create and repair atoms. This is required because quarks and bosons that make up atoms have a relatively short life and must be replaced. The communication messages are described by a context-sensitive language specified using message generating rules. The SUSY (Supersymmetric) inversion model is a process defined by these rules that describes how subatomic particles are made and combined to create or repair atoms; indeed, there is a language message (a sequence of process actions) for every IU/MU system regulatory problem. An OpEMCSS (Operational Evaluation Model for Complex Sensitive Systems) simulation model of the IU/MU system can learn these rules to gain an understanding of the SUSY messaging process. The IU/MU system simulation model will be used to learn and generate messages that result in the LENR (Low Energy Nuclear Reaction) system producing useful new physics. 
pdf3D Vision-Based Anti-Collision System for Automatic Load Movements with Tower Cranes - A Simulation Oriented Development Process Alexander Schock-Schmidtke, Gonzalo Bernabé Caparrós, and Johannes Fottner (Technical University of Munich) Program Track: Simulation and Artificial Intelligence Program Tags: Conceptual Modeling, Cyber-Physical Systems, Data Driven, Neural Networks Abstract AbstractThis paper presents a simulation-driven development approach for a camera-based anti-collision system designed for automated tower cranes. Stereo camera systems mounted directly on the crane's hook generate real-time 3D point clouds to detect people in the immediate danger zone of suspended loads. A virtual construction site was implemented in a game engine to simulate dynamic scenarios and varying weather conditions. The system utilizes a neural network for pedestrian detection and computes the minimum distance between load and detected persons. A closed-loop architecture enables real-time data exchange between simulation and processing components and allows easy transition to real-world cranes. The system was evaluated under different visibility conditions, showing high detection accuracy in clear weather and degraded performance in fog and rain due to the limitations of stereo vision. The results demonstrate the feasibility of using synthetic environments and point cloud-based perception to develop safety-critical assistance systems in construction automation. pdfFabSim: A Micro-Discrete Event Simulator for Machine Learning Dynamic Job Shop Scheduling Optimizers Sean Mondesire, Michael Brown, and Bulent Soykan (University of Central Florida) Program Track: Simulation and Artificial Intelligence Abstract AbstractDynamic job shop scheduling (DJSS) demands rapid, data-driven decisions to balance conflicting objectives in complex, stochastic manufacturing environments. 
While Artificial Intelligence, particularly deep reinforcement learning (RL), offers powerful optimization capabilities, its development and evaluation are often limited by the lack of suitable high-speed, flexible simulation testbeds. To address this, we introduce FabSim, a micro-discrete-event simulation (micro-DES) environment purpose-built for developing and evaluating intelligent scheduling strategies for DJSS. FabSim models core scheduling dynamics, including resource contention, flexible routing, batching, and stochastic events, using an efficient event-driven kernel and indexed look-up tables that enable sub-second simulation runtimes. Compliant with the Farama Gymnasium API, it integrates seamlessly with standard machine learning libraries. FabSim facilitates reproducible benchmarking and serves as a practical back-end for digital twin applications requiring near-real-time analysis. This paper details FabSim's design, validates its high-speed performance, and demonstrates its utility for AI-driven scheduling research. pdf Data Driven, Python, Simulation and Artificial IntelligenceAI for Transportation and Networks Session Chair: Tetsuro Takahashi (Fujitsu Ltd.) A Topological Data Analysis Approach to Detecting Congestion in Pedestrian Crowds Naoyuki Kamiyama (Kyushu University), Hiroaki Yamada and Takashi Kato (Fujitsu Limited), Shizuo Kaji (Kyushu University), and Tetsuro Takahashi (Fujitsu Limited) Program Track: Simulation and Artificial Intelligence Abstract AbstractThis paper addresses the challenge of detecting heavy congestion in pedestrian crowds. While congestion detection is straightforward when monitoring predefined locations through pedestrian density measurements, identifying potential congestion areas in advance remains problematic. We propose a novel algorithm that eliminates the need for prior knowledge of congestion-prone locations by topological data analysis. 
Our approach transforms time-series pedestrian position data into topological representations, enabling more robust congestion detection. We validate our algorithm using both synthetic data from multi-agent simulations and real-world pedestrian measurements. The experimental results demonstrate that converting traditional positional data into topological representations significantly improves the performance of machine learning models in congestion detection tasks. pdfOut of the Past: An AI-Enabled Pipeline for Traffic Simulation from Noisy, Multimodal Detector Data and Stakeholder Feedback Rex Chen and Karen Wu (Carnegie Mellon University), John McCartney (Path Master Inc.), and Norman Sadeh and Fei Fang (Carnegie Mellon University) Program Track: Simulation and Artificial Intelligence Program Tag: Data Driven Abstract AbstractHow can a traffic simulation be designed to faithfully reflect real-world traffic conditions? One crucial step is modeling the volume of traffic demand. But past demand modeling approaches have relied on unrealistic or suboptimal heuristics, and they have failed to adequately account for the effects of noisy and multimodal data on simulation outcomes. In this work, we integrate advances in AI to construct a three-step, end-to-end pipeline for systematically modeling traffic demand from detector data: computer vision for vehicle counting from noisy camera footage, combinatorial optimization for vehicle route generation from multimodal data, and large language models for iterative simulation refinement from natural language feedback. Using a road network from Strongsville, Ohio as a testbed, we show that our pipeline accurately captures the city’s traffic patterns in a granular simulation. Beyond Strongsville, incorporating noise and multimodality makes our framework generalizable to municipalities with different levels of data and infrastructure availability. 
pdfSpillover-Aware Simulation Analysis for Policy Evaluation in Epidemic Networks JINGYUAN CHOU, Jiangzhuo Chen, and Madhav Marathe (University of Virginia) Program Track: Simulation and Artificial Intelligence Program Tag: Python Abstract AbstractSimulations are widely used to evaluate public health interventions, yet they often fail to quantify how interventions in one region indirectly affect others—a phenomenon known as spillover. This omission can lead to incorrect policy evaluations and misattributed effects. We propose a post-simulation framework for estimating causal spillover in spatial epidemic networks. Our method introduces a directional graph neural network (Dir-GNN) estimator that learns homophily-aware representations and estimates counterfactual outcomes under hypothetical neighbor treatments. Applied to a semi-synthetic setup built on PatchSim—a metapopulation SEIR simulator with realistic inter-county mobility—our estimator recovers spillover effects and corrects attribution errors inherent in standard evaluation. Experiments show that accounting for spillover improves treatment estimation and policy reliability. pdf
Simulation Around the World Track Coordinator - Simulation Around the World: María Julia Blas (INGAR (CONICET-UTN)), Stewart Robinson (Newcastle University) Simulation Around the WorldSimulation Applications in Health Session Chair: Stewart Robinson (Newcastle University) Capacity Analysis in a Genetic Sequencing Laboratory through Discrete Event Simulation Maria Alejandra Soriano Castañeda, Marcela Isabel Guevara Suarez, and Andrés Leonardo Medaglia Gonzalez (Universidad de los Andes) Program Track: Simulation Around the World Abstract AbstractDiscrete-event simulation is widely used in the healthcare field to optimize processes and manage resources. This study presents a DES model developed for a molecular biology laboratory in Colombia, recognized by Oxford Nanopore Technologies and specializing in Sanger and Nanopore sequencing techniques. Using real data, the model analyzes processing times for each sequencing method and estimates the laboratory’s maximum capacity under varying technician experience, equipment availability, task durations and number of samples processed. The goal is to provide genomic laboratories and regulatory stakeholders with a flexible tool to evaluate performance, the impact of alternative configurations, and support capacity planning under resource and demand constraints. pdfLeveraging Simulation to Study Health Inequalities: Insights from Systematic Literature Review and Survey of Researchers Tesfamariam Abuhay, Nisha Baral, Kalyan Pasupathy, and Ranganathan Chandrasekaran (University of Illinois Chicago) and Stewart Robinson (Newcastle University) Program Track: Simulation Around the World Abstract AbstractThis study presents the applications, challenges, and future research directions of simulation for health inequalities research based on a systematic literature review and survey of authors included in the review. 
pdfA First Glimpse Into Hybrid Simulation For Inpatient Healthcare Robotics Sebastian Amaya-Ceballos and Paula Escudero (Universidad EAFIT), Pamela Carreno-Medrano (Monash University), and William Guerrero (Universidad de La Sabana) Program Track: Simulation Around the World Abstract AbstractBurnout among healthcare workers in inpatient care is worsened by administrative tasks and staff shortages. Although service robots could support these environments, adoption remains limited, especially for non-clinical roles, and they lack structured methods for early-stage evaluation. This paper presents a conceptual hybrid simulation model as a foundation for pre-adoption analysis of robotic agents in inpatient care. The model integrates Discrete Event Simulation for patient flow and Agent-Based Modeling for human–robot interactions, designed using the ODD protocol and informed by expert input. It formalizes assumptions, roles, and workflow logic to support early experimentation and scenario testing. Initial simulations assess task distributions with and without robot integration, highlighting reduced documentation and assistance burdens for nurses. The conceptual model was validated through structured meetings with clinical staff and proved feasible for further development. The model includes the assumptions, structure, and logic behind the simulation, serving as a basis for future model expansion. pdf Simulation Around the WorldAdvanced Simulation Techniques for Intelligent Systems Session Chair: Tesfamariam Abuhay (University of Illinois at Chicago) From the Past to the Future: 10 Years of Discrete-Event Simulation and Machine Learning Through a Systematic Review of WSC Proceedings Yanina Döning (FRSF-UTN) and María Julia Blas (INGAR (CONICET-UTN)) Program Track: Simulation Around the World Abstract AbstractOver the past few years, the interest in Machine Learning (ML) has grown due to its ability to improve solutions related to other fields. 
This paper explores the use of ML techniques in simulation through a systematic literature review of the Winter Simulation Conference proceedings from 2013 to 2023. Our research is focused on the Discrete-Event Simulation (DES) field, centering our attention on the Discrete-Event System Specification (DEVS) formalism as a particular case. The research questions were designed to examine the most frequent contexts, applications, methods, and software tools used in these studies. As a result, this review reports insights into 44 research studies. The main contribution of this paper is related to systematically gathering, analyzing, and discussing the knowledge disseminated in these two areas (ML and DES), aiming to support future research and expand the literature in this field. pdfRoad Traffic Congestion Prediction using Discrete Event Simulation and Regression Machine Learning Models Ivan Kristianto Singgih (University of Surabaya, Indonesia Artificial Intelligence Society); Moses Laksono Singgih (Institut Teknologi Sepuluh Nopember); and Daniel Nathaniel (University of Surabaya) Program Track: Simulation Around the World Abstract AbstractRoad traffic management enters a new era with the automatic collection and analysis of big data. The traffic data could be collected continuously using various IoT sensors (light, video, etc.) and stored in the cloud. The collected data are then analyzed within a short time to make traffic control decisions, e.g., traffic redirection, traffic light duration change, and vehicle route recommendation. This study proposes (1) a traffic simulation considering a road network with several traffic lights and (2) regression machine learning models to understand the behavior of the vehicles based on the real-time characteristics of the traffic. 
The numerical experiment results show that (1) the best models are OrthogonalMatchingPursuitCV and the HuberRegressor, and (2) the road network behavior is affected by the condition of all intersections rather than only certain intersections or surrounding road segments. pdfSimulation of Decentralized Coordination Strategies for Networked Multi-robot Systems with Emergent Behavior-DEVS Ezequiel Pecker Marcosig (Instituto de Ciencias de la Computación (ICC-CONICET); Facultad de Ingeniería, UBA); Juan Francisco Presenza (INTECIN (Institute of Engineering Sciences and Technology 'Hilario Fernández Long'), CONICET-UBA, and School of Engineering, UBA.); Ignacio Mas (University of San Andrés and CONICET); José Ignacio Alvarez-Hamelin (INTECIN (Institute of Engineering Sciences and Technology 'Hilario Fernández Long'), CONICET-UBA, and School of Engineering, UBA.); Juan Ignacio Giribet (University of San Andrés and CONICET); and Rodrigo Daniel Castro (Departamento de Computación, FCEyN-UBA / Instituto de Ciencias de la Computación (ICC-CONICET)) Program Track: Simulation Around the World Abstract AbstractThe design of distributed control strategies for multi-robot systems (MRS) relies heavily on simulations
to validate algorithms prior to real-world deployment. However, simulating such systems poses significant
challenges due to their dynamic network topologies and scalability requirements, where full inter-robot
communication becomes computationally prohibitive. In this paper, we extend the applications of the
Emergent Behavior DEVS (EB-DEVS) formalism by developing an agent-based model (ABM) to address
key distributed control challenges in networked MRS. The proposed approach supports both direct and
indirect interactions between agents (robots) via event messages and through macroscopic-microscopic state
sharing, respectively. We validate the model using a challenging cooperative target-capturing scenario that
demands dynamic multi-hop communication and robust coordination among agents. This complex use case
highlights the strengths of EB-DEVS in managing asynchronous events while minimizing communication
overhead. The results demonstrate the formalism’s effectiveness in supporting decentralized control and
simulation scalability within a hierarchical micro-macro modeling framework. pdf Simulation Around the World DEVS-Based Simulation Frameworks and Tools Session Chair: Rafael Mayo-García (CIEMAT) DEVS Simulation of Belbin’s Team Roles for Collaborative Team Dynamics Silvio Vera, Alonso Inostrosa-Psijas, Diego Monsalves, Fabian Riquelme, and Eliezer Zúñiga (Universidad de Valparaíso) and Gabriel Wainer (Carleton University) Program Track: Simulation Around the World Abstract AbstractBelbin’s team role theory identifies nine behavioral roles that, when combined, support effective collaboration. Configuring teams based on these roles is often manual, costly, and inflexible. This article presents an individual-oriented simulation model using the Discrete-Event System Specification (DEVS) formalism to emulate group interactions shaped by Belbin roles. Each team member is modeled as an atomic entity with behavior defined by a combination of two roles. This enables controlled experimentation with different team compositions, interaction timings, and communication sequences. Simulations were conducted using synthetic data, defined under plausible assumptions based on Belbin’s framework. The model enables exploration of how different configurations affect communication flow and task distribution, supporting the identification of team structures that promote balance and efficiency. Results demonstrate the potential of integrating behavioral theories with formal modeling approaches to improve team design. This work offers a flexible and extensible simulation-based method for analyzing and optimizing team dynamics.
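As generic background to the DEVS concepts recurring in the abstracts above (atomic models, external and internal transitions, a time-advance function), here is a minimal sketch in plain Python. The `Processor` model and the toy coordinator are invented for illustration and are far simpler than what PowerDEVS, CD++, or the tools used in these papers provide:

```python
# Toy DEVS-style atomic model: a processor that serves one job at a time.
# Jobs arriving while busy are dropped. Invented for illustration; real DEVS
# tools implement the full (parallel) formalism.
INF = float("inf")

class Processor:
    def __init__(self, service_time=2.0):
        self.service_time = service_time
        self.phase, self.job, self.sigma = "idle", None, INF

    def ext_transition(self, e, job):   # delta_ext, e = elapsed time
        if self.phase == "idle":
            self.phase, self.job, self.sigma = "busy", job, self.service_time
        else:
            self.sigma -= e             # still busy: the new job is dropped

    def int_transition(self):           # delta_int: service completes
        self.phase, self.sigma = "idle", INF

    def output(self):                   # lambda: emitted just before delta_int
        return self.job

    def time_advance(self):             # ta: time until next internal event
        return self.sigma

def run(model, arrivals):
    """Tiny root coordinator; arrivals is a sorted list of (time, job)."""
    t, out = 0.0, []
    for at, job in arrivals:
        while t + model.time_advance() <= at:   # internal events first
            t += model.time_advance()
            out.append((t, model.output()))
            model.int_transition()
        model.ext_transition(at - t, job)
        t = at
    while model.time_advance() != INF:          # drain remaining events
        t += model.time_advance()
        out.append((t, model.output()))
        model.int_transition()
    return out

print(run(Processor(2.0), [(0.0, "a"), (1.0, "b"), (5.0, "c")]))
# → [(2.0, 'a'), (7.0, 'c')]  ('b' arrived while busy and was dropped)
```

The separation between state transitions, output, and time advance is exactly what lets DEVS models be composed hierarchically, as in the papers above.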
pdf A DSL-Driven Web Tool for Modeling and Simulation of RDEVS Artifacts Clarisa Espertino (Facultad Regional Santa Fe, Universidad Tecnológica Nacional) and María Julia Blas and Silvio Gonnet (Instituto de Desarrollo y Diseño INGAR, CONICET-UTN) Program Track: Simulation Around the World Abstract AbstractRouted Discrete Event System Specification (RDEVS) is a formalism that extends the Discrete Event System Specification (DEVS) to support the Modeling and Simulation (M&S) of routing processes in discrete event-based scenarios. We present a web-based tool that, following the principles of Model-Driven Engineering (MDE), combines textual modeling with code generation to reduce development time, ensure consistency with the RDEVS formalism, and promote independence from external software tools. pdf Bridging Simulation Formalisms and Embedded Targets: a PowerDEVS-driven IoT/Robotics Workflow for ESP32 Ezequiel Pecker-Marcosig (Instituto de Ciencias de la Computación (ICC-CONICET); Facultad de Ingeniería, Universidad de Buenos Aires) and Sebastián Bocaccio and Rodrigo Daniel Castro (Departamento de Computación, FCEyN-UBA / Instituto de Ciencias de la Computación (ICC-CONICET)) Program Track: Simulation Around the World Abstract AbstractThis work presents a methodology for developing embedded applications in Internet-of-Things (IoT) and robotic systems through Modeling and Simulation (M&S)-based design. We introduce adaptations to the PowerDEVS toolkit's abstract simulator to enable embedded execution on resource-constrained platforms, specifically targeting the widely used ESP32 development kit tailored to IoT systems. We present a library of DEVS atomic models designed for simulation-environment interaction, enabling embedded software development through sensor data acquisition and actuator control.
To demonstrate the practical utility of the embedded PowerDEVS framework, we evaluate its performance in real-world discrete-event control applications, including a line-follower robot and an electric kettle temperature regulator. These case studies highlight the approach’s versatility and seamless integration in IoT and robotic systems. pdf Simulation Around the World Applied Simulation for Sustainable and Efficient Industrial Operations Session Chair: Stewart Robinson (Newcastle University) Simulating the Path to Net Zero in a UK Industrial Cluster Zsofia Baruwa, Kathy Kotiadis, and Virginia Spiegler (University of Kent) and Niki Ansari Dogaheh (-) Program Track: Simulation Around the World Abstract AbstractThe decarbonisation of industrial clusters is central to the UK’s net zero strategy. Located in southeast England, the Kemsley Industrial Cluster (comprising five firms) is the largest industrial cluster in the counties of Kent and Sussex, offering a critical opportunity for coordinated action. This study supports their decarbonisation-focused decision-making through a Monte Carlo simulation model. The model takes hourly energy use as input and simulates the potential impact of emerging decarbonisation technologies including hydrogen, and collaborative energy exchanges such as private wire electricity and shared steam flows. Key model outputs include emissions profiles and the overall energy balance. The model produces insights for six main decarbonisation scenarios over a 25-year horizon (2025–2050). Results suggest the cluster could reach net zero emissions by 2035 if carbon capture is implemented. This study develops a novel Monte Carlo model that captures inter-firm dynamics within industrial clusters, directly informing collaborative decarbonisation investment strategies.
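As a rough illustration of the kind of Monte Carlo scenario analysis the Kemsley study describes, the sketch below samples an uncertain carbon-capture start year and an uncertain demand level, then estimates the probability that emissions are near zero by a target year. Every figure in it is invented for illustration, not taken from the paper:

```python
import random

def simulate_emissions(ccs_start_year, rng, years=range(2025, 2051),
                       base_mt=1.2, capture_rate=0.95):
    """One sampled trajectory: {year: cluster emissions in Mt CO2e}."""
    profile = {}
    for year in years:
        demand = base_mt * rng.uniform(0.9, 1.1)        # demand uncertainty
        captured = capture_rate * demand if year >= ccs_start_year else 0.0
        profile[year] = demand - captured
    return profile

def prob_near_zero_by(target_year, runs=2000, seed=42):
    """Monte Carlo estimate of P(emissions near zero by target_year)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(runs):
        start = rng.randint(2030, 2040)                 # uncertain CCS start
        if simulate_emissions(start, rng)[target_year] < 0.1:
            hits += 1
    return hits / runs

print(prob_near_zero_by(2035))   # here, roughly P(CCS starts by 2035)
```

A real model of this kind would replace the toy distributions with measured hourly energy data and technology-specific cost and capture curves; the Monte Carlo machinery stays the same.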
pdf Resource Planning for On-time Delivery Improvement Using Discrete-Event Simulation: A Case Study in the Packaging Ink Industry Kadek Wahyu Suarta Putra and Niniet Indah Arvitrida (Institut Teknologi Sepuluh Nopember) Program Track: Simulation Around the World Abstract AbstractThis study presents a case of a packaging ink manufacturer based in Surabaya, Indonesia. The production process involves resources, including bowls, field operators, lab technicians, mixers, quality checking equipment, and a high-stacker. The company faces a persistent problem of low on-time delivery performance, which risks customer loss. To address this issue, a discrete-event simulation approach is employed, considering the system’s complexity, process interdependencies, and queuing dynamics. The simulation evaluates several improvement scenarios to identify the best alternative for enhancing on-time delivery while accounting for changes in variable costs. Additionally, a Net Present Value (NPV) analysis is used to assess the financial feasibility of selected solution alternatives. The results indicate that a rearrangement of production resources can significantly improve on-time delivery performance. Although variable costs increase, the selected alternative yields a positive NPV, justifying the investment. This study demonstrates how simulation-based decision-making can support resource planning in complex manufacturing environments. pdf Enhancing Operational Efficiency in Mumbai’s Airport Departure Terminal: A Hybrid Simulation Modeling Approach Armin Kashefi and Faris Alwzinani (Brunel University London) and Noaman Malek (Bestway Group) Program Track: Simulation Around the World Abstract AbstractAirports function as complex systems comprising interrelated entities, resources, and processes. Inefficiencies within these systems, particularly long waiting times, contribute to passenger dissatisfaction.
This study examines operational improvements at Mumbai International Airport, one of India’s busiest hubs, focusing on identifying bottlenecks and reducing congestion for departing passengers. Discrete event simulation served as the core methodological framework. Through hybrid simulation, AS-IS models were developed to analyze operational processes and evaluate bottlenecks. BPMN was used for conceptual modeling, followed by Simul8 for dynamic modeling. TO-BE system configurations for check-in, security screening and boarding were then remodeled. Simulation experiments were conducted to determine the optimal setup that minimizes queuing time while maximizing passenger throughput. The findings highlight an ideal system configuration that significantly reduces waiting times and overall passenger processing time, resulting in improved operational efficiency. These insights provide data-driven recommendations for optimizing airport processes, ultimately enhancing passenger experience and improving airport performance. pdf Simulation Around the World Advanced Simulation for Transport Efficiency and Infrastructure Development Session Chair: Rafael Mayo-García (CIEMAT) Design Options to Increase the Efficiency of Intra-Port Container Transports Ann-Kathrin Lange (Hamburg University of Technology), Anne Kathrina Schwientek (Free Hanseatic City of Bremen), and Carlos Jahn (Hamburg University of Technology) Program Track: Simulation Around the World Abstract AbstractHigh volumes, short buffer times and many players influence the maritime transport chain and intra-port container transport. Congestion at logistics hubs leads to inefficient processes, increased waiting times, costs and emissions. Truck appointment systems are intended to reduce congestion by offering trucking companies time slots for delivering or collecting containers. However, these systems lack flexibility, which limits utilization and reduces efficiency.
This study develops and evaluates strategies to improve these systems through simulation. The results show that medium-length time slots improve the situation when processes are transparent, while longer time slots help when transparency is limited. Overbooking and open access are mutually exclusive, but offer slight advantages individually. Adjustments to the opening hours of other nodes show little benefit. Overall, flexibility options positively influence the proportion of load trips and waiting times. pdf Performance Evaluation of Typical Data Networks: a Proposal Gabriel Cerqueira Pires (University of Campinas) and Leonardo Grando, José Roberto Emiliano Leite, and Edson Luiz Ursini (University of Campinas) Program Track: Simulation Around the World Abstract AbstractThis paper presents a general framework for planning and dimensioning a multi-service Internet Protocol (IP) network supporting voice, video, and data. We describe a methodology for dimensioning link capacity and packet delay while meeting Quality of Service (QoS) requirements under general traffic distributions, using discrete event simulation. Additionally, the paper proposes a procedure to optimize transmission probabilities between nodes, aiming to increase throughput and reduce delay based on the offered network traffic. This procedure applies different transmission probabilities for each node type. Admission control is implicitly incorporated to regulate the number of active streams. A Jackson network is employed to validate the simulation model. Once validated, other probability distributions are explored to assess achievable delay and throughput, and the resulting suboptimal optimization outcomes are presented.
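The validation step mentioned above, checking a simulated network against a Jackson-network analytical result, can be illustrated on a single M/M/1 node, whose mean sojourn time has the closed form 1/(μ − λ). The sketch below uses the Lindley recursion and is a toy under invented parameters, not the authors' model:

```python
import random

def mm1_mean_sojourn(lam, mu, customers=200_000, seed=1):
    """Mean time in system for an M/M/1 queue, via the Lindley recursion."""
    rng = random.Random(seed)
    wq, total = 0.0, 0.0
    for _ in range(customers):
        service = rng.expovariate(mu)
        total += wq + service                  # sojourn = waiting + service
        inter = rng.expovariate(lam)           # time until the next arrival
        wq = max(0.0, wq + service - inter)    # Lindley recursion
    return total / customers

est = mm1_mean_sojourn(lam=0.5, mu=1.0)
print(est, "vs analytical", 1.0 / (1.0 - 0.5))   # estimate should be near 2.0
```

A full Jackson-network check would chain several such nodes with probabilistic routing and compare each node's delay against its product-form solution; agreement on these analytically tractable cases is what licenses trusting the simulator on general traffic distributions.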
pdf Long Range Planning for Electric Vehicle Charging Station Sizing for São Caetano do Sul's Master Plan Felipe Cohen, Gabriela Augusto, Kaue Varoli, Leonardo Franco, and Pedro Loureiro (Instituto Mauá de Tecnologia) and Leonardo Chwif (Instituto Mauá de Tecnologia, Simulate Simulation Technology) Program Track: Simulation Around the World Abstract AbstractThis study aimed to conduct discrete-event simulations using Simul8 in key locations within the city of São Caetano do Sul, Brazil (a shopping mall, supermarkets, and gas stations) to assess the current level of service provided by electric vehicle charging stations. Simulations are also carried out to evaluate future scenarios, considering the projected growth of electric vehicles both nationally and within the city. In partnership with municipal departments, this work provides valuable insights to the city's master plan, currently under development, which will outline strategic guidelines for urban growth over the next ten years. In addition to assessing current infrastructure performance, the study seeks to determine the optimal number of charging stations required to ensure a satisfactory level of service under different future demand scenarios. pdf Simulation Around the World Simulation-Driven Optimization and Automation in Smart Manufacturing Systems Session Chair: Stewart Robinson (Newcastle University) Bridging Expertise and Automation: A Hybrid Approach to Automated Model Generation for Digital Twins of Manufacturing Systems Lekshmi P (Indian Institute of Technology, Goa) and Neha Karanjkar (Indian Institute of Technology Goa) Program Track: Simulation Around the World Abstract AbstractWe consider the problem of Automated Model Generation (AMG) for Digital Twins (DTs) of manufacturing systems, particularly those represented as stochastic Discrete-Event Simulation (DES) models.
Unlike fully data-driven approaches, where both model parameters and structure are inferred from event logs, we propose an expert-in-the-loop approach targeting systems whose structure changes only occasionally, with such changes being automatically detected. While machine states and parameters (such as task delays) are continuously inferred from data, the process remains expert-tunable via a GUI-based, guided flow. A natural language description of system structure is translated into readable DES models using LLMs. The model is built as an interconnection of configurable components from a lightweight, open-source library, FactorySimPy, with parameters inferred seamlessly from data. We outline the proposed flow, its components, and results from a proof-of-concept implementation, and provide a detailed review of existing AMG approaches, highlighting key differentiating aspects of our framework. pdf Digital Twin Optimization Approach For Flexible Manufacturing System Scheduling Mokhtar Nizar SID-LAKHDAR (ESSA Tlemcen; MELT, University of Tlemcen); Mehdi Souier (University of Tlemcen); and Hichem Haddou-Benderbal (Aix-Marseille University) Program Track: Simulation Around the World Abstract AbstractTo remain competitive in an evolving market, enterprises must adopt modern approaches in their production lines. Flexible Manufacturing Systems (FMS) produce diverse, high-quality products with short processing times. New technologies have transformed manufacturing. Among them, Digital Twin (DT) technology improves decision-making through real-time simulations. Most studies on FMS focus on reducing makespan. This paper proposes a model that jointly optimizes makespan, energy consumption, and production costs. It also presents a DT framework with a physical and virtual part connected in real time. The framework includes production data, optimization, scheduling, and learning components. Two experiments are conducted. The first uses Simulated Annealing (SA) to minimize makespan.
Results show that SA is flexible, finding different schedules with the same makespan. The second applies Archived Multi-Objective Simulated Annealing (AMOSA) to optimize makespan, production cost, and energy consumption. Results show that AMOSA provides better trade-offs between objectives, making it effective for complex FMS. pdf Implementation of a Dobot Magician Robotic Arm in a Didactic Lean Line: a Realistic Simulation-based Approach Aicha Belkebir (Higher School of Applied Sciences of Tlemcen, Algeria); Mehdi Souier (Manufacturing Engineering Laboratory of Tlemcen (MELT), University of Tlemcen); and Hichem Haddou Benderbal (Aix-Marseille University, University of Toulon, CNRS) Program Track: Simulation Around the World Abstract AbstractThis work explores the enhancement of the Earmalean line in order to increase accessibility, operational efficiency, and educational impact. FlexSim is used to model system behavior, process interactions, and human-machine collaboration in a dynamic and visual environment. The simulation incorporated realistic process logic, including part flows, workstation interactions, stock management, and operator tasks, allowing the testing of multiple what-if scenarios under controlled conditions. FlexSim’s 3D modeling capabilities and integrated analytics tools enabled the evaluation of key performance indicators such as cycle time, throughput, resource utilization, and work-in-process inventory. Data-driven modeling enabled realistic scenario testing, including automated logistics and human-robot collaboration. The results underline the value of integrating smart technologies into didactic systems and open pathways for future development toward a connected and adaptive learning platform.
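The simulated-annealing-for-makespan idea in the FMS scheduling abstract above can be sketched as a simple permutation flow-shop search. The processing-time matrix, neighborhood (a random job swap), and cooling schedule below are all invented for illustration and are not the paper's experimental setup:

```python
import math, random

def makespan(perm, proc):
    """Permutation flow-shop makespan; proc[machine][job]."""
    done = [0.0] * len(proc)
    for j in perm:
        for k in range(len(proc)):
            start = max(done[k], done[k - 1] if k else 0.0)
            done[k] = start + proc[k][j]
    return done[-1]

def anneal(proc, iters=5000, t0=10.0, alpha=0.999, seed=0):
    rng = random.Random(seed)
    n = len(proc[0])
    cur = list(range(n))
    cur_cost = makespan(cur, proc)
    best, best_cost, t = cur[:], cur_cost, t0
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)          # neighbor: swap two jobs
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]
        cost = makespan(cand, proc)
        # accept improvements always; worse moves with Boltzmann probability
        if cost <= cur_cost or rng.random() < math.exp((cur_cost - cost) / t):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand[:], cost
        t *= alpha                              # geometric cooling
    return best, best_cost

proc = [[3, 1, 2], [2, 4, 1]]                   # 2 machines x 3 jobs, invented
perm, cost = anneal(proc)
print(perm, cost)
```

Multi-objective variants such as AMOSA keep an archive of non-dominated schedules instead of a single best solution, but the acceptance mechanism is the same annealing idea.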
pdf Simulation Around the World Advanced Simulation Techniques in Industrial Systems Session Chair: Tesfamariam Abuhay (University of Illinois at Chicago) Optimal Placement of Sensors at Minimal Cost for Causal Observability in Discrete Event Systems Bouchra Mamoun (Higher School in Applied Sciences of Tlemcen (ESSA-Tlemcen)); Fouad Maliki (Higher School in Applied Sciences of Tlemcen (ESSA-Tlemcen); Manufacturing Engineering Laboratory of Tlemcen (MELT), University of Tlemcen); and Fayssal ARICHI (Higher School in Applied Sciences of Tlemcen (ESSA-Tlemcen)) Program Track: Simulation Around the World Abstract AbstractThis study addresses the problem of optimal sensor placement in discrete-event systems modeled by partially observable Petri nets, with the dual objectives of ensuring causal observability and minimizing total installation cost. The system is algebraically reformulated as a descriptor system, which leads to a combinatorial, nonlinear, and non-convex optimization problem. To overcome these difficulties, two metaheuristic approaches, simulated annealing and genetic algorithms, are proposed and evaluated on a complex manufacturing system. The results obtained highlight the effectiveness and scalability of both methods, underscoring their strong potential for industrial applications. pdf Improving Empty Container Depot Layouts: Combining Efficiency and Safety Using Simulation Cristóbal Vera-Carrasco, Sebastián Muñoz-Herrera, and Cristian D. Palma (Universidad del Desarrollo) Program Track: Simulation Around the World Abstract AbstractLayout decisions in empty-container depots affect both efficiency and safety, but prior studies rarely evaluate them together. This work presents an integrated discrete-event simulation framework that quantifies truck turnaround times, lifting equipment travel distances, and potential collision risks.
A case study in a Chilean multipurpose terminal shows the value of the proposed framework, revealing critical workload imbalances and traffic bottlenecks, particularly for reefer operations. Statistical analyses show significant performance degradation under high truck arrival frequencies and confirm that traditional efficiency metrics only partially explain safety risks. Managerial insights include layout redesign, targeted equipment allocation, and automated gate operations. The study highlights the necessity of balancing operational efficiency and safety in space-constrained terminals to improve overall performance. pdf System Dynamics Simulation of Main Material Inventory Control Policy for Electricity Distribution Network Operations Diajeng Anjarsari Rahmadani (Institut Teknologi Sepuluh Nopember, PT PLN (Persero)) and Niniet Indah Arvitrida (Institut Teknologi Sepuluh Nopember) Program Track: Simulation Around the World Abstract AbstractInventory management needs coordinated efforts among internal and external stakeholders. An electric power distribution service unit faces challenges in inventory management. Operational constraints, including the ordering cycle, fluctuating lead times, and the absence of buffer stock, must be managed while adhering to mandatory service standards. This study adopts a system dynamics simulation to identify key stakeholders and map their interdependencies. The objects of study are power cable, cubicle, insulator, and lightning arrester. A causal loop diagram (CLD) is employed to visualize and analyze the dynamics of the problem. The inventory system is modeled as ten sub-models: material request, purchasing order, order receipt, material issued transaction, inventory level, coordination, service level, inventory turnover, budget and total cost, and service duration day.
A continuous review policy is integrated into the model as the proposed policy; combined with supplier improvements, it gives the best results, a 26.12% increase in service level. pdf
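A continuous-review policy of the kind proposed above can be sketched as a simple (s, Q) simulation that measures service level: whenever the inventory position drops to the reorder point s, an order of size Q is placed. The demand, lead times, and lost-sales assumption below are invented, not the paper's data:

```python
import random

def service_level(s, q, days=10_000, seed=7):
    """Fraction of days on which all demand is met from stock."""
    rng = random.Random(seed)
    on_hand, pipeline, met = q, [], 0      # pipeline: (arrival_day, qty)
    for day in range(days):
        on_hand += sum(n for d, n in pipeline if d <= day)   # receive orders
        pipeline = [(d, n) for d, n in pipeline if d > day]
        demand = rng.randint(0, 4)                           # invented demand
        if demand <= on_hand:
            met += 1
        on_hand = max(0, on_hand - demand)                   # lost sales
        position = on_hand + sum(n for _, n in pipeline)
        if position <= s:                                    # continuous review
            pipeline.append((day + rng.randint(2, 5), q))    # stochastic lead
    return met / days

low, high = service_level(s=3, q=10), service_level(s=12, q=10)
print(low, high)   # a higher reorder point should raise the service level
```

A system dynamics study like the one above would model this at the level of stocks, flows, and feedback loops rather than individual orders, but the trade-off it explores, reorder policy versus service level, is the same one this toy exposes.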
Simulation as Digital Twin Track Coordinator - Simulation as Digital Twin: Haobin Li (Centre for Next Generation Logistics, National University of Singapore), Giovanni Lugaresi (KU Leuven), Jie Xu (George Mason University) Input Modeling, Metamodeling, Petri Nets, Python, Validation, Simulation as Digital Twin Digital Twin Calibration and Validation Methods Session Chair: Meryem Mahmoud (Karlsruhe Institute of Technology) Automated Detection in Unstructured Material Streams: Challenges and Solutions Alexander Schlosser (Friedrich-Alexander-Universität Erlangen Nürnberg); Takeru Nemoto (SIEMENS AG); and Jonas Walter, Sebastian Amon, Joerg Franke, and Sebastian Reitelshöfer (Friedrich-Alexander-Universität Erlangen Nürnberg) Program Track: Simulation as Digital Twin Program Tag: Python Abstract AbstractPost-consumer packaging waste continues to rise, intensifying the need for automated sorting to increase recycling efficiency. Lightweight packaging (LWP), with its variable geometries, materials, and occlusions, remains especially difficult for conventional vision systems. Integrating You Only Look Once (YOLO) instance segmentation into robotic simulation platforms enables robust real-time detection. Experiments show that synthetic datasets yield consistently high segmentation accuracy, whereas real-world performance fluctuates. Crucially, increasing model size or resolution does not guarantee improvement; task-specific tuning and system-level integration are more effective. Simulation frameworks combining Unity, Robot Operating System 2 (ROS2), and MoveIt2 provide realistic evaluation and optimization. These findings demonstrate that AI-based segmentation and digital twins can deliver scalable, adaptive, and self-optimizing sorting systems, offering a practical pathway to sustainable material recovery and circular economy implementation.
pdf Preserving Dependencies in Partitioned Digital Twin Models for Enabling Modular Validation Ashkan Zare (University of Southern Denmark) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology) Program Track: Simulation as Digital Twin Program Tags: Petri Nets, Validation Abstract AbstractLeveraging Digital Twins, as near real-time replicas of physical systems, can help identify inefficiencies and optimize production in manufacturing systems. Digital Twins’ effectiveness, however, relies on continuous validation of the underlying models to ensure accuracy and reliability, which is particularly challenging for complex, multi-component systems where different components evolve at varying rates. Modular validation mitigates this challenge by decomposing models into smaller sub-models, allowing for tailored validation strategies. A key difficulty in this approach is preserving the interactions and dependencies among the sub-models while validating them individually; isolated validation may yield individually valid sub-models while failing to ensure overall model consistency. To address this, we build on our previously proposed modular validation framework and introduce an approach that enables sub-model validation while maintaining interdependencies. By ensuring that the validation process reflects these dependencies, our method enhances the effectiveness of Digital Twins in dynamic manufacturing environments.
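The modular validation work above is tagged with Petri nets, which are a common formalism for such component models. As generic background only, here is a minimal marking-and-firing sketch in Python; the two-transition "machine" net is invented for illustration:

```python
# Toy Petri net: places hold tokens; a transition is enabled when each of
# its input places holds enough tokens, and firing moves tokens from input
# to output places. Invented example, not from the paper.

def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

transitions = {
    "start":  ({"raw": 1, "machine_free": 1}, {"busy": 1}),
    "finish": ({"busy": 1}, {"done": 1, "machine_free": 1}),
}

m = {"raw": 2, "machine_free": 1}
for name in ["start", "finish", "start", "finish"]:
    pre, post = transitions[name]
    assert enabled(m, pre), f"{name} not enabled"
    m = fire(m, pre, post)
print(m)   # two parts done, machine free again
```

In a partitioned digital twin model, shared places like `machine_free` are exactly the kind of cross-sub-model dependency that isolated validation risks losing.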
pdf Dynamic Calibration of Digital Twin via Stochastic Simulation: A Wind Energy Case Study Best Contributed Theoretical Paper - Finalist, Best Contributed Applied Paper - Finalist Yongseok Jeon and Sara Shashaani (North Carolina State University), Eunshin Byon (University of Michigan), and Pranav Jain (North Carolina State University) Program Track: Simulation as Digital Twin Program Tags: Input Modeling, Metamodeling Abstract AbstractThis study presents an approach to dynamically calibrate a digital twin to support decision-making in systems operating under uncertainty. The framework integrates uncertainty by turning a physics-based model into a stochastic simulation, where independent variables that represent environmental conditions may be nonstationary whereas target variables are conditionally stationary. Calibration itself is formulated as a simulation optimization problem that we solve using a root-finding strategy. As a case study, we apply the framework to the prediction of short-term power deficit, known as the wake effect, in wind farms using real-world data and demonstrate the robustness of the proposed framework. Besides advancing digital twin research, the presented methodology is expected to impact wind farm wake steering strategy by enabling accurate short-term wake effect prediction. pdf AnyLogic, Data Driven, DEVS, Open Source, Python, Simulation as Digital Twin Digital Twin Frameworks and Software Architectures Session Chair: Giovanni Lugaresi (KU Leuven) Firescore: a Framework for Incident Risk Evaluation, Simulation, Coverage Optimization and Relocation Experiments Guido A.G. Legemaate (Fire Department Amsterdam-Amstelland, Safety Region Amsterdam-Amstelland); Joep van den Bogaert (Jheronimus Academy of Data Science); Rob D.
van der Mei (Centrum Wiskunde & Informatica); and Sandjai Bhulai (Vrije Universiteit Amsterdam) Program Track: Simulation as Digital Twin Program Tags: Data Driven, Python Abstract AbstractThis paper introduces fireSCore, an open source framework for incident risk evaluation, simulation, coverage optimization, and relocation experiments. As a digital twin of operational fire department logistics, its visualization frontend provides a live view of current coverage for the most common fire department units. Manually changing a unit status allows for a view into future coverage as it triggers an immediate recalculation of projected response times and coverage using the Open Source Routing Machine. The backend provides the controller and model, and implements various algorithms, e.g., a relocation algorithm that optimizes coverage during major incidents. The data broker handles communication with data sources and provides data for the front- and backend. An optional simulator adds an environment in which various scenarios, models and algorithms can be tested and aims to drive current and future organizational developments within the Dutch national fire service. pdf Evaluating Third-Party Impacts in Urban Air Mobility Community Integration: A Digital Twin Approach Alexander Ireland and Chun Wang (Concordia University) Program Track: Simulation as Digital Twin Program Tag: AnyLogic Abstract AbstractUrban Air Mobility presents unique community impacts. UAM vehicles mostly operate above populated areas while traditional aviation typically operates point-to-point between populated areas, so impacts on third parties are increasingly important. This paper explores the approach of using a Digital Twin to optimize EVTOL vehicle flight planning by using accurate live population data to minimize third-party safety and privacy impacts. Live population density data is translated into an equivalent agent-based simulation and used to calculate safety and privacy metrics.
A Montreal vertiport case study compares a baseline EVTOL approach suggested by the FAA to 128 alternate approach scenarios to find generalizations and demonstrate the usefulness of Digital Twin technology for UAM operational optimization. It was found that flight path characteristics suggested by regulators are not necessarily optimal when considering third-party impacts, and by extension that Digital Twins are a promising technology that will play a significant role in making UAM safer. pdf Towards a DEVS-Based Simulation Engine for Digital Twin Applications Arnis Lektauers (Riga Technical University) Program Track: Simulation as Digital Twin Program Tags: DEVS, Open Source Abstract AbstractDigital twins (DT) are increasingly being adopted to improve system monitoring, prediction, and decision making in various domains. Although simulation plays a central role in many DT implementations, a lack of formal modeling foundations often leads to ad hoc and non-scalable solutions. This paper proposes a simulation engine for DT applications based on the Discrete Event System Specification (DEVS) formalism. DEVS provides a robust, modular, and hierarchical modeling framework suitable for modeling the structure and behavior of complex cyber-physical systems. A key contribution is the integration of the Parallel DEVS for Multicomponent Systems (multiPDEVS) formalism with X-Machines to support state and memory separation for simulation models with the goal of improving model scalability and reusability, as well as providing a basis for integration with DTs. The paper presents the architectural design of the engine, highlights its main functional components, and demonstrates its capabilities using a preliminary use case.
pdf Complex Systems, Conceptual Modeling, Cyber-Physical Systems, Python, Siemens Tecnomatix Plant Simulation, Simulation as Digital Twin Applications of Digital Twins Session Chair: Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, University of Southern Denmark) Digital Twin to Mitigate Adverse Addictive Gambling Behavior Felisa Vazquez-Abad and Jason Young (Hunter College CUNY) and Silvano A Bernabel (Graduate Center CUNY) Program Track: Simulation as Digital Twin Program Tags: Conceptual Modeling, Python Abstract AbstractThis work develops a simulation engine to create a digital twin that will monitor a gambler’s betting behavior when playing games. The digital twin is designed to perform simulations to compare outcomes of different betting strategies, under various assumptions on the psychological profile of the player. With these simulations it then produces recommendations to the player aimed at mitigating adverse outcomes. Our work focuses on efficient simulation and the creation of the corresponding GUI that will become the interface between the player and the digital twin. pdf EVIMAS - Digital Twin-Based Electric Vehicle Infrastructure Modeling And Analytics System Aparna Kishore, Kazi Ashik Islam, and Madhav Marathe (University of Virginia, Biocomplexity Institute) Program Track: Simulation as Digital Twin Program Tags: Complex Systems, Cyber-Physical Systems, Python Abstract AbstractThe growing shift to electric vehicles (EVs) presents significant challenges due to the complexities in spatial, temporal, and behavioral aspects of adoption and infrastructure development. To address these challenges, we present the EV Infrastructure Modeling and Analytics System (EVIMAS), a modular and extensible software system built using microservices principles.
The system comprises three loosely coupled components: (i) a data processing pipeline that constructs a static digital model using diverse inputs, (ii) a modeling and simulation pipeline for simulating dynamic, multi-layered interactions, and (iii) an analytics pipeline that supports task execution and the analysis of results. We demonstrate the utility of EVIMAS via three case studies. Our studies show that such analysis can be done efficiently under varying constraints and objectives, including geographic regions, analytical goals, and input configurations. EVIMAS supports fine-grained, agent-based EV simulations, facilitating the integration of new components, data, and models for EV infrastructure development. pdf Digital Twins for Optimizing the Transition from Job-shop to Mass Production: Insights from Marine Pump Manufacturing in Scandinavia Sebastian Pihl (University of Exeter, FRAMO); Ahmad Attar and Martino Luis (University of Exeter); and Øystein Haugen (FRAMO) Program Track: Simulation as Digital Twin Program Tag: Siemens Tecnomatix Plant Simulation Abstract AbstractMarine pumping systems are among the most essential equipment for maritime operations. Typically, this type of equipment is manufactured to order in small quantities, thereby increasing the cost and time to market. The rising demand for this equipment has made the transition to mass production even more attractive for key players, which can potentially give these companies a significant competitive edge. However, to maintain the competitive advantages of such a transition, it is crucial to make optimal decisions, taking into account all influential aspects. This study, assisted by experts from pioneering companies in this industry, proposes an integrated approach that applies redundancy analysis, inventory policy calibration, and GA-based optimization to address these challenges, all built upon a DES-based digital twin.
Applying our framework to the studied case drastically reduced the cycle time from more than a week to about one day, raising the annual capacity over the projected demand. pdf Input Modeling, Matlab, Python, Simulation as Digital TwinAdaptive Digital Twins Session Chair: Joachim Hunker (Technische Universität Dortmund, Fraunhofer Institute for Software and Systems Engineering ISST) Real-time Image Processing and Emulation for Intelligent Warehouse Automation Sumant Joshi, Saurabh Lulekar, Tarciana Almeida, Abhineet Mittal, and Ganesh Nanaware (Amazon) Program Track: Simulation as Digital Twin Program Tags: Input Modeling, Matlab Abstract Material flow simulation and emulation are essential tools used in warehouse automation design and commissioning, to create a digital twin and validate equipment control logic. Current emulation platforms lack an internal computer vision (CV) toolkit, which poses a challenge for emulating vision-based control system behavior that requires real-time image processing capability. This paper addresses this gap by proposing an innovative framework that utilizes a bridge between Emulate3D and MATLAB to establish real-time bidirectional communication to emulate vision-based control systems. The integration enables transfer of visual data from Emulate3D to MATLAB, which provides a CV toolkit to analyze the vision data and communicate control decisions back to Emulate3D. We evaluated this approach by developing a small-footprint package singulator (SFPS); the results show that the SFPS achieved target throughput with a 45% improvement in singulation accuracy over conventional singulators, a 64% smaller footprint, and no need for the gapper equipment that conventional singulators require.
pdf Virtual Commissioning of AI Vision Systems for Human-robot Collaboration Using Digital Twins and Deep Learning Urfi Khan (Oakland University), Adnan Khan (Institute of Innovation in Technology and Management), and Ali Ahmad Malik (Oakland University) Program Track: Simulation as Digital Twin Program Tag: Python Abstract Virtual commissioning is the process of validating the design and control logic of a physical system prior to its physical deployment. Machine vision systems are an integral part of automated systems, particularly in perception-driven tasks; however, the complexity of accurately modeling these systems and their interaction with dynamic environments makes their verification in virtual settings a significant challenge. This paper presents an approach for the virtual commissioning of AI-based vision systems which can be useful to evaluate the safety and reliability of human-robot collaborative cells using virtual cameras before physical deployment. A digital twin of a collaborative workcell was developed in Tecnomatix Process Simulate, including a virtual camera that generated synthetic image data. A deep learning model was trained on this synthetic data and subsequently validated using real-world data from physical cameras in an actual human-robot collaborative environment.
pdf AnyLogic, Complex Systems, Petri Nets, Supply Chain, Simulation as Digital TwinDigital Twin Model Generation Methods Session Chair: Christian Schwede (University of Applied Sciences and Arts Bielefeld, Fraunhofer Institute for Software and Systems Engineering ISST) Modular Digital Twins: The Foundation for the Factory of the Future Hendrik van der Valk (TU Dortmund University, Fraunhofer Institute for Software and Systems Engineering ISST); Lasse Jurgeleit (TU Dortmund University); and Joachim Hunker (Fraunhofer Institute for Software and Systems Engineering ISST) Program Track: Simulation as Digital Twin Program Tags: AnyLogic, Supply Chain Abstract Companies face stiff challenges regarding their value chains' circular and digital transformation. Digital Twins are a valuable and powerful tool to ease such transformation. Yet, Digital Twins are not just one virtualized model but several parts with different functions. This paper analyzes Digital Twins’ frameworks and reference models on an architectural level. We derive a modular framework displaying best practices based on empirical data from particular use cases. Here, we concentrate on discrete manufacturing processes to leverage benefits for the factory of the future. According to a design science cycle, we also demonstrate and evaluate the modular framework in a real-world application in an assembly line. The study provides an overview of the state-of-the-art for Digital Twin frameworks and shows ways for easy implementation and avenues for further development. As a synthesis of particular architectures, the modular approach offers a novel and thoroughly generalizable blueprint for Digital Twins.
pdf Multi-flow Process Mining as an Enabler for Comprehensive Digital Twins of Manufacturing Systems Atieh Khodadadi and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology) Program Track: Simulation as Digital Twin Program Tags: Complex Systems, Petri Nets Abstract Process Mining (PM) has proven useful for extracting Digital Twin (DT) simulation models for manufacturing systems. PM is a family of approaches designed to capture temporal process flows by analyzing event logs that contain time-stamped records of relevant events. With the widespread availability of sensors in modern manufacturing systems, events can be tracked across multiple process dimensions beyond time, enabling a more comprehensive performance analysis. Some of these dimensions include energy and waste. By integrating and treating these dimensions analogously to time, we enable the use of PM to extract process flows along multiple dimensions, an approach we refer to as multi-flow PM. The resulting models that capture multiple dimensions are ultimately combined to enable comprehensive DTs that support multi-objective decision-making. In this paper, we present our approach to generating these multidimensional discrete-event models and, through an illustrative case study, demonstrate how they can be utilized for multi-objective decision support. pdf Using Deep Learning to Improve Simulation-based Decision Making by Process Lead Time Predictions Christian Schwede (University of Applied Sciences and Arts Bielefeld, Fraunhofer Institute for Software and Systems Engineering ISST) and Adrian Freiter (Fraunhofer Institute for Software and Systems Engineering ISST) Program Track: Simulation as Digital Twin Abstract Based on digital twins, simulation is often used in companies for the regular planning and control of operational processes.
However, when modeling the lead times of individual processes, mean values of process times measured in advance are often used, which can lead to errors in planning. This work demonstrates how models of these time distributions can be created and updated within a digital twin framework using machine learning. The lead times are used in simulation to create schedules. The approach is validated using the online order workforce scheduling of a medium-sized company that assembles individual packages of office materials for its customers. pdf Simulation as Digital TwinDigital Twins and Manufacturing Session Chair: Guodong Shao (National Institute of Standards and Technology) Data Requirements for a Digital Twin of a CNC Machine Tool Deogratias Kibira (National Institute of Standards and Technology; University of Maryland, College Park) and Guodong Shao (National Institute of Standards and Technology) Program Track: Simulation as Digital Twin Abstract Digital twins can enable the intelligent operation of computer numerical control (CNC) machine tools. The data for the digital twin are collected from the machine controller, sensors, or other Internet of Things (IoT) devices. Creating a valid digital twin for a specific purpose requires identifying and specifying the right types and quality of data. However, challenges exist, such as unlabeled data and a lack of clarity on the sufficiency of data required to build a digital twin for a specific purpose. This paper discusses the data types, sources, and acquisition methods for creating digital twins for machine tools with different capabilities. Depending on the purpose, any digital twin can be categorized as descriptive, diagnostic, predictive, prescriptive, or autonomous. Data requirements for each of these categories are discussed. This paper can be used as a guide for developing and validating different types of digital twins for CNC machine tools.
pdf Energy and Accuracy Trade-offs in Digital Twin Modeling: A Comparative Study of an Autoclave System Stijn Bellis, Joost Mertens, and Joachim Denil (Universiteit Antwerpen) Program Track: Simulation as Digital Twin Abstract Digital twinning is becoming increasingly prevalent and is creating significant value for organizations. When creating a digital twin, there are many modeling formalisms and levels of detail to choose from. However, designing, running, and deploying these models all consume energy.
This paper examines the trade-offs between energy consumption and accuracy across different modeling formalisms, using an autoclave as a case study. We created a high-fidelity model of an autoclave, and from this model, several approximations were developed, including a 2D model and neural networks.
The energy consumption of these models, as well as of energy-intensive steps such as training the neural network, was measured and compared. This provides insights into the trade-offs between energy usage and accuracy for the selected modeling formalisms. pdf Virtual Robot Controllers for Enhancing Reliability in Industrial Robot Commissioning Ali Ahmad Malik (Oakland University) Program Track: Simulation as Digital Twin Abstract This paper addresses the challenges of inaccuracy in digital twinning and offline programming for industrial robotics. Simulations are conventionally used to design and verify industrial robot systems. These virtual environments are often used as Model-in-the-Loop (MiL) simulations; however, they tend to lack accuracy in joint positioning, trajectory planning, and cycle time estimation. This limitation becomes problematic in assembly operations, where multiple coordinated tasks must be executed accurately and repeatedly. The challenge intensifies further in multi-robot cells, which require coordinated actions. To address this challenge, this article proposes a Software-in-the-Loop (SiL) approach that integrates external Virtual Robot Controllers with a simulation-based digital twin, enabling robot development within a Software-Defined Manufacturing System (SDMS) architecture. A framework for building an SDMS enables precise modeling of robotic systems. The results indicate that the proposed approach is effective for virtual commissioning during new system development and can be extended to operational stages for system reconfiguration.
pdf Complex Systems, Conceptual Modeling, Cyber-Physical Systems, Data Driven, Python, Supply Chain, Simulation as Digital TwinDigital Twin Implementation Session Chair: Hendrik van der Valk (TU Dortmund University, Fraunhofer Institute for Software and Systems Engineering ISST) Review and Classification of Challenges in Digital Twin Implementation for Simulation-Based Industrial Applications Alexander Wuttke (TU Dortmund University), Bhakti Stephan Onggo (University of Southampton), and Markus Rabe (TU Dortmund University) Program Track: Simulation as Digital Twin Program Tags: Complex Systems, Cyber-Physical Systems Abstract Digital Twins (DTs) play an increasingly important role in connecting physical objects to their digital counterparts, with simulation playing a key role in deriving valuable insights. Despite their potential, DT implementation remains complex and adoption in industrial operations is limited. This paper investigates the challenges of DT implementation in simulation-based industrial applications through a systematic review of 124 publications from 2021 to 2024. The findings reveal that while nearly half of the publications tested prototypes, most are limited to laboratory settings and lack critical features such as cybersecurity or real-time capabilities. Discrete Event Simulation and numerical simulation emerge as the dominant simulation techniques in DTs. From the analysis, 33 challenges are identified and their classification into nine dimensions is proposed. Finally, further research opportunities are outlined.
pdf Optimization of Operations in Solid Bulk Port Terminals Using a Digital Twin Jackeline del Carmen Huaccha Neyra and Lorrany Cristina da Silva (GENOA), João Ferreira Netto (University of Sao Paulo), and Afonso Celso Medina (GENOA) Program Track: Simulation as Digital Twin Program Tags: Conceptual Modeling, Python Abstract This article presents the development of a Digital Twin (DT)-based tool for optimizing scheduling in solid bulk export port terminals. The approach integrates agent-based simulation with the Ant Colony System (ACS) metaheuristic to efficiently plan railway unloading, stockyard storage, and maritime shipping. The model interacts with operational data, anticipating issues and aiding decision-making. Validation was performed using real data from a port terminal in Brazil, yielding compatible results and reducing port stay duration. Tests were based on a Baseline Scenario, aligned with a mineral export terminal, for ACS parameter calibration, along with three additional scenarios: direct shipment, preventive maintenance, and a simultaneous route from stockyard to ships. The study highlights the DT’s potential to modernize port operations, offering practical support in large-scale logistics environments. pdf Analyzing Implementation for Digital Twins: Implications for a Process Model Annika Hesse (TU Dortmund University) and Hendrik van der Valk (TU Dortmund University, Fraunhofer Institute for Software and Systems Engineering ISST) Program Track: Simulation as Digital Twin Program Tags: Conceptual Modeling, Data Driven, Supply Chain Abstract For many companies, digital transformation is an important lever for adapting their work and business processes to constant change, keeping them up-to-date and reactive to changes in the global market.
Digital twins are seen as a promising means of holistically transforming production systems and value chains, but despite their potential, there has been a lack of standardized implementation processes, often resulting in efficiency losses. Therefore, this paper aims to empirically identify process models for implementing digital twins through a structured literature review and derive implications for a standardized, widely applicable process model. The literature review is based on vom Brocke’s methodology and focuses on scientific articles from recent years. Based on 211 identified publications, relevant papers were analyzed after applying defined exclusion criteria. The results provide fundamental insights into currently used process models and open perspectives for developing a standardized implementation framework for digital twins. pdf Simulation as Digital TwinPanel: The Present and Evolution of Twinning: Rethinking the Multifaceted Representations of Complex Systems Session Chair: Giovanni Lugaresi (KU Leuven) The Present and Evolution of Twinning: Rethinking the Multifaceted Representations of Complex Systems Andrea D'Ambrogio (University of Rome Tor Vergata); Edward Hua (The MITRE Corporation); Giovanni Lugaresi (KU Leuven); Guodong Shao (National Institute of Standards and Technology); Mamadou Traoré (University of Bordeaux); and Hans Vangheluwe (University of Antwerp, Flanders Make) Program Track: Simulation as Digital Twin Abstract The concept of twinning has gained significant traction across industry and academia, largely driven by the rise of digital twins (DTs). DTs have become ubiquitous in fields such as manufacturing, healthcare, and urban planning, with applications tailored to specific domains, from physical building replicas for design and fundraising to digital factory models for real-time operations.
While twinning research has focused primarily on the “digital” aspects of a DT through architectures, frameworks, and technical implementations, this panel examines the “twin” part of a DT, emphasizing the modeling process from a system to its representation, which is at the heart of twinning. The discussion focuses on fidelity levels, validity frames, and terminological nuances. With examples, the panel emphasizes the need for formal modeling foundations, scalable workflows, and interoperability to support twinning systems. The aim is to explore future research directions and novel applications of twinning. pdf Cyber-Physical Systems, Data Driven, Emergent Behavior, Metamodeling, Process Mining, Python, System Dynamics, Simulation as Digital TwinDigital Twins for Sustainable Business Processes Session Chair: Bhakti Stephan Onggo (University of Southampton, CORMSIS) Data-driven Digital Twin for the Predictive Maintenance of Business Processes Paolo Bocciarelli and Andrea D'Ambrogio (University of Rome Tor Vergata) Program Track: Simulation as Digital Twin Program Tags: Data Driven, Metamodeling, Process Mining Abstract This paper presents a data-driven framework for the predictive maintenance of Business Processes based on the Digital Twin paradigm. The proposed approach integrates process mining techniques and a low-code development approach to build reliability-aware simulation models from system logs. These models are used to automatically generate executable DTs capable of predicting resource failures and estimating the Remaining Useful Life (RUL) of system components. The predictions are then exploited to trigger preventive actions or automated reconfigurations. The framework is implemented using the PyBPMN/eBPMN framework and evaluated on a manufacturing case study. Results show that the DT enables timely interventions, minimizes system downtimes, and ensures process continuity.
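As a rough, hypothetical illustration of the RUL-based predictive maintenance idea described in the preceding abstract (not the authors' PyBPMN/eBPMN implementation), the sketch below fits an exponential lifetime model to a resource's failure log and triggers preventive maintenance when the predicted RUL falls inside the planning horizon. All function names, the exponential assumption, and the figures are invented for illustration.

```python
import statistics

def estimate_failure_rate(inter_failure_times):
    """Estimate a constant failure rate from observed times between failures
    (assumes an exponential lifetime model -- an illustrative choice only)."""
    return 1.0 / statistics.mean(inter_failure_times)

def remaining_useful_life(failure_rate):
    """Under the exponential (memoryless) model, expected RUL is constant."""
    return 1.0 / failure_rate

def maintenance_due(rul, planning_horizon):
    """Trigger preventive maintenance when the predicted RUL falls inside
    the next planning horizon."""
    return rul <= planning_horizon

# Hypothetical log: hours between successive failures of one resource.
log = [120.0, 95.0, 110.0, 87.0]
rate = estimate_failure_rate(log)     # mean lifetime is 103 h
rul = remaining_useful_life(rate)
decision = maintenance_due(rul, planning_horizon=150.0)  # True: schedule maintenance
```

A real reliability-aware DT would of course use richer lifetime distributions (e.g., Weibull) and update the estimate as new log events arrive.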
pdf Review of Digital Technologies for the Circular Economy and the Role of Simulation Julia Kunert, Alexander Wuttke, and Hendrik van der Valk (TU Dortmund University) Program Track: Simulation as Digital Twin Program Tag: Cyber-Physical Systems Abstract The Circular Economy (CE) is essential for achieving sustainability, with digital technologies serving as key enablers for its adoption. However, many businesses lack knowledge about these technologies and their applications. This paper conducts a structured literature review (SLR) to identify the digital technologies proposed in recent literature for CE, their functions, and current real-world use cases. Special attention is given to simulation, which is considered a valuable digital technology for advancing CE. The analysis identifies Artificial Intelligence, the Internet of Things, Blockchain, simulation, cyber-physical systems, data analytics, Digital Twins, robotics, and Extended Reality as the relevant digital technologies. They are used for waste sorting and production automation, disassembly assistance, demand analysis, data traceability, energy and resource monitoring, environmental impact assessment, product design improvement, condition monitoring, predictive maintenance, process improvement, product design assessment, and immersive training. We discuss the findings in detail and suggest paths for further research.
pdf A Digital Twin of a Water Network for Exploring Sustainable Water Management Strategies Souvik Barat, Abhishek Yadav, and Vinay Kulkarni (Tata Consultancy Services Ltd); Gurudas Nulkar and Soomrit Chattopadhyay (Gokhale Institute of Politics and Economics, Pune); and Ashwini Keskar (Pune Knowledge Cluster, Pune) Program Track: Simulation as Digital Twin Program Tags: Emergent Behavior, Python, System Dynamics Abstract Efficient water management is an increasingly critical challenge for policymakers tasked with ensuring reliable water availability for agriculture, industry, and domestic use while mitigating flood risks during monsoon seasons. This challenge is especially pronounced in regions where water networks rely primarily on rain-fed systems. Managing such a water ecosystem is complex due to inherent constraints on water sources, storage, and flow; environmental uncertainties such as variable rainfall and evaporation; and the increasing demands of urbanization, industrial expansion, and equitable interstate water sharing. In this study, we present a stock-and-flow-based simulatable digital twin designed to accurately represent the dynamics of a rain-dependent water network comprising dams, rivers, and associated environmental and usage factors. The model supports scenario-based simulation and the evaluation of mitigation policies to enable evidence-based decision-making. We demonstrate the usefulness of our approach using a real water body network from western India that covers more than 300 km of heterogeneous landscape.
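The stock-and-flow modeling style mentioned in the abstract above can be pictured with a deliberately tiny sketch: a single dam is a stock, and rainfall inflow, evaporation, usage releases, and monsoon overflow are flows. All capacities and rainfall figures below are invented for illustration; the paper's digital twin models a full network of dams and rivers with far richer dynamics.

```python
def simulate_dam(capacity, storage, monthly_inflows, demand, evaporation_rate):
    """Advance the dam stock month by month; water above capacity spills downstream.
    Returns a (storage, released, spill) tuple per month."""
    history = []
    for inflow in monthly_inflows:
        storage += inflow                       # flow in: rainfall/runoff
        storage -= storage * evaporation_rate   # flow out: evaporation
        released = min(storage, demand)         # flow out: usage releases
        storage -= released
        spill = max(0.0, storage - capacity)    # overflow during monsoon months
        storage -= spill
        history.append((storage, released, spill))
    return history

history = simulate_dam(
    capacity=100.0, storage=40.0,
    monthly_inflows=[5.0, 90.0, 120.0, 10.0],   # hypothetical monsoon spike
    demand=20.0, evaporation_rate=0.02)
```

Chaining several such stocks, with each dam's spill feeding the next river segment, gives the network structure that scenario-based policy experiments can then run against.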
pdf Complex Systems, Cyber-Physical Systems, Data Driven, Petri Nets, Process Mining, Python, Simulation as Digital TwinData-Driven Modeling for Digital Twins Session Chair: Haobin Li (National University of Singapore, Centre for Next Generation Logistics) A Hybrid Simulation Modeling within a Digital Twin Framework of AGV-Drone Systems Rupesh Bade Shrestha and Konstantinos Mykoniatis (Auburn University) Program Track: Simulation as Digital Twin Abstract Integrating Autonomous Guided Vehicles (AGVs) and multi-copter drones, or AGV-drone systems, can potentially enhance efficiency and manipulation in intralogistics operations. While AGVs provide reliable, safe, and energy-efficient ground-based movement, drones offer rapid aerial mobility, making their collaboration promising. However, implementing this system is challenging as it involves collaborating with indoor drones. In particular, the testing phase can be unsafe and expensive because of the high probability of crashes due to the lack of technological advancement in this domain. Moreover, the communication protocols become complicated once the system scales. A digital twin (DT) methodology can help mitigate these risks and improve scalability. This study presents the simulation aspects of a work-in-progress DT for the AGV-drone system. In this work-in-progress DT, the hardware-in-the-loop feature is intended to support communication within the system and to build a DT testing environment. pdf Explainability in Digital Twins: Overview and Challenges Meryem Mahmoud (Karlsruhe Institute of Technology) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, The Maersk Mc-Kinney Moller Institute) Program Track: Simulation as Digital Twin Program Tags: Complex Systems, Cyber-Physical Systems, Data Driven Abstract Digital Twins are increasingly being adopted across industries to support decision-making, optimization, and real-time monitoring.
As these systems, and correspondingly the underlying models of their Digital Twins, grow in complexity, there is a need to enhance explainability at several points in the Digital Twin. This is especially true for safety-critical systems and applications that require Human-in-the-Loop interactions. Ensuring explainability in both the underlying simulation models and the related decision-support mechanisms is key to trust, adoption, and informed decision-making. While explainability has been extensively explored in the context of machine learning models, its role in simulation-based Digital Twins remains less examined. In this paper, we review the current state of the art on explainability in simulation-based Digital Twins, highlighting key challenges, existing approaches, and open research questions. Our goal is to establish a foundation for future research and development, enabling more transparent, trustworthy, and effective Digital Twins. pdf Integrating Expert Trustworthiness into Digital Twin Models Extracted from Expert Knowledge and Internet of Things Data: A Case Study in Reliability Michelle Jungmann (Karlsruhe Institute of Technology) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, University of Southern Denmark) Program Track: Simulation as Digital Twin Program Tags: Petri Nets, Process Mining, Python Abstract The extraction of Digital Twin models from both expert knowledge and Internet of Things data remains an underexplored area, with existing approaches typically being highly customized. Expert knowledge, provided by human experts, is influenced by individual experience, contextual understanding, and domain-specific knowledge, leading to varying levels of uncertainty and trustworthiness.
In this paper, we address the identified research gap by extending our previous work and introducing a novel approach that models and integrates expert trustworthiness into the extraction of what we term data-knowledge fused Digital Twin models. Key features of the approach are the quantification of expert trustworthiness and algorithms for selecting and integrating knowledge into model extraction based on trustworthiness. We demonstrate our approach for quantifying and incorporating trustworthiness levels in a reliability modeling case study. pdf
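One way to picture integrating expert trustworthiness into model extraction, as the abstract above describes, is a trust-weighted fusion of parameter estimates from data and from experts. The weighting scheme, function names, and numbers below are invented for illustration and are not the paper's algorithm.

```python
def fuse_estimates(data_estimate, expert_estimates, data_weight=1.0):
    """Combine a data-derived parameter estimate with expert estimates,
    weighting each expert by a trustworthiness score in [0, 1].
    (An invented weighting scheme for illustration only.)"""
    numerator = data_weight * data_estimate
    denominator = data_weight
    for value, trust in expert_estimates:
        numerator += trust * value
        denominator += trust
    return numerator / denominator

# Hypothetical mean repair time (hours): IoT data suggests 4.0, while a
# highly trusted expert says 6.0 and a less trusted expert says 2.0.
fused = fuse_estimates(4.0, [(6.0, 0.9), (2.0, 0.3)])
```

The fused value is pulled most strongly toward the data and the more trustworthy expert; an expert with trust 0 contributes nothing, so untrusted knowledge is effectively filtered out of the extracted model.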
Track Coordinator - Simulation in Education: Omar Ashour (Penn State University), Ashkan Negahban (Pennsylvania State University) Simulation in EducationSimulation-Based Teaching and Learning Session Chair: Michael E. Kuhl (Rochester Institute of Technology) Problem-Based Learning via Immersive Simulations: Effectiveness and Repository of Sample Modules Ashkan Negahban (Penn State), Sabahattin Gokhan Ozden (Penn State Abington), and Omar Ashour (Penn State Behrend) Abstract Traditional teaching methods often place learners in a decontextualized space due to students' lack of access to real-world facilities to gain hands-on experiential learning. This paper discusses integrating problem-based learning with immersive simulated environments, where the simulation serves as the context by mimicking a real-world system. This allows students to perform virtual site visits of the simulated system instead of visiting and collecting data from a real facility. Supporting pedagogical theories for the proposed immersive simulation-based learning (ISBL) approach are discussed. A free online repository of ISBL modules is shared and a sample module is presented. The paper also provides a review of educational research studies on the effectiveness of ISBL in terms of students' learning outcomes. The paper is intended for simulation educators who are interested in adopting immersive simulation-based learning in their teaching by reusing/re-purposing the models developed as part of their simulation projects for educational purposes. pdf Simulate to Elevate: An EML Approach to Teaching Facilities Design Michael E. Kuhl and Anthony DiVasta (Rochester Institute of Technology) Abstract Simulation can be an effective tool to convey concepts and promote comprehension in courses at all levels of education.
In this paper, we present an entrepreneurial mindset learning (EML) activity that utilizes discrete-event simulation to elevate the depth of understanding of manufacturing systems and layout design in a facilities planning course. This active learning module compels students to simultaneously consider manufacturing activities, capacity, and layout design using a dynamic simulation environment. We demonstrate the use of robust, data-driven, simulation objects that can be manipulated by students to meet design objectives. This allows students to focus on the facilities planning learning outcomes without the need for in-depth simulation modeling knowledge or programming skills. We present the results and lessons learned from implementing the simulation activity in a facilities planning course. pdf
Track Coordinator - Simulation in Space: Anastasia Anagnostou (Brunel University London), Elie Azar (Carleton University), Maziar Ghorbani (Brunel University London) Simulation in SpaceSimulation in Space Keynote Session Chair: Simon J. E. Taylor (Brunel University London) The Unique Challenges Associated with Building Useful Space-Based Simulations Michael R. McFarlane (NASA Johnson Space Center) Program Track: Simulation in Space Abstract The National Aeronautics and Space Administration (NASA) has faced incredible challenges throughout its history associated with flying humans in space safely and efficiently. NASA has relied heavily on modeling and simulation products to prepare for these missions and the challenges that must be overcome accordingly. However, building accurate models and simulations of complex vehicles in outer space, on the Moon, or on other astronomical bodies is especially challenging because of the uniqueness of those environments, which includes reduced gravity, intense and immediate temperature shifts, challenging radiation levels, high-contrast lighting challenges, and several other factors. Adding to the difficulty is the fact that, in general, space missions must be fully successful the first time they are flown. Unlike many industries that can afford to build and test numerous concepts and test articles through a series of successes and failures, space missions are simply too expensive and too hazardous to fail, especially human missions. “Failure is not an option” is an oft-repeated slogan associated with human space missions, and for good reason. As a result of these challenges, NASA has invested significantly in building modular, reusable simulation products that have been shared across multiple programs, thereby providing extensive use-history and reusable validation data, which ensures high quality while keeping costs down.
These tools include a core simulation environment (Trick), an orbital dynamics package (JEOD), a multi-body dynamics package (MBDyn), and countless other tool suites designed to reduce the workload associated with simulating new vehicles in new space-based environments. Many of these tool suites have been released as open-source projects; others have not, for a variety of reasons. NASA has also relied heavily on international standards where they exist, including High Level Architecture (HLA), an IEEE standard, and SpaceFOM, a SISO standard. These standards have ensured that we can build interoperability into our simulations without requiring our partners (domestic or international) to all use the same suite of simulation tools. We also seek out commercial simulation solutions wherever possible. There are excellent commercial visualization tools available, but commercial tools that model the complexities of space to the extent needed to ensure mission success are fairly limited. This presentation will explore some of the history of modeling and simulation at NASA, how those capabilities have been used to ensure mission success, how human space flight has evolved in its development and use of these simulation capabilities, future challenges that will almost certainly rely heavily on the current suite of simulation tools, and new innovations that will hopefully come to pass in the future.
pdf Conceptual Modeling, Distributed, Java, Simulation in SpaceLunar Mission Simulations Session Chair: Panayiotis Michael (National Technical University of Athens) Leveraging International Collaboration for Interactive Lunar Simulations: An Educational Experience from SEE 2025 Kaique Govani, Andrea Lucia Braga, José Lucas Fogaça Aguiar, Giulia Oliveira, Andressa Braga, Rafael Henrique Ramos, Fabricio Torquato Leite, and Patrick Augusto Pinheiro Silva (FACENS) Program Track: Simulation in Space Program Tags: Conceptual Modeling, Distributed, Java Abstract This paper presents an educational experience from the Simulation Exploration Experience (SEE) 2025, focusing on leveraging international collaboration to develop interactive lunar simulations. Specifically, the FACENS team created two interoperable simulation federates, a Lunar Cable Car system and an Astronaut system, using Java, Blender, and the SEE Starter Kit Framework (SKF). Putting emphasis on the educational and collaborative aspects of SEE, our primary objectives included developing robust real-time interactions with international teams, improving simulation visuals, and improving astronaut behavior and logic using optimized path-finding algorithms. Seamless interoperability was demonstrated with federates developed by Brunel University and Florida Polytechnic University. Our experiences and lessons learned provide valuable insights for future teams engaged in distributed simulation development and international collaborative projects in the space exploration domain. pdf Collaborative Development of a Distributed Lunar Mission Simulation: A Multi-Team Demonstration within Simulation Exploration Experience (SEE 2025) Maziar Ghorbani, Anastasia Anagnostou, Nura T. Abubakar, Hridyanshu Aatreya, April Miller, and Simon J. E.
Taylor (Brunel University London) Program Track: Simulation in Space Abstract AbstractThe Simulation Exploration Experience (SEE) is an international industry–academic initiative that promotes the use of distributed simulation for space exploration. Each year, multiple university teams participate by developing simulations based on lunar mission scenarios. In SEE 2025, Brunel University London collaborated with three other institutions to create a real-time, distributed simulation of a conceptual lunar mission. Using High-Level Architecture (HLA) standards and NASA’s Distributed Observer Network (DON), five federates were developed to simulate a landing procedure, astronaut transport between facilities, inter-federate communications, and a laboratory tour. This abstract outlines the collaborative development effort, the roles of the participating universities, and the interoperable interactions between their systems. pdfA Data-Stream-Ring Architecture for High-Bandwidth, Real-Time Health Services on the Moon Surface Panayiotis Michael and Panayiotis Tsanakas (National Technical University of Athens) Program Track: Simulation in Space Abstract AbstractA Data-Stream-Ring architecture for high-bandwidth, real-time health services on the lunar surface is proposed. The architecture connects lunar surface habitats in a ring; these habitats include control towers, data center facilities, and Cardiac Intensive Care Units. The modular lunar surface habitats simulate Earth airport facilities on the lunar surface (Lunarports), providing services in the domains of Health, Habitation, Energy, Mobility, Logistics, Power, and In-Situ Data Stream Processing. In this paper we focus on the provision of Telecardiology Services, which monitor and diagnose cardiac issues in order to proactively guide operations teams to rescue humans at risk. 
Our research is based on data stream processing services at large scale (Internet-of-Space), adapting the technology developed by our team at UCLA (Hoffman cluster) and on the ARIS HPC infrastructure of Greece (GRNET), which is capable of monitoring millions of Internet-of-Things devices in real time (478 million IoT devices with latency on the order of milliseconds). pdf
Track Coordinator - Simulation Optimization: David J. Eckman (Texas A&M University), Siyang Gao (City University of Hong Kong), Yuwei Zhou (University of Chicago) Data Driven, Metamodeling, Ranking and Selection, Simulation OptimizationSequential Sampling Session Chair: Zaile Li (INSEAD) Multi-agent Multi-armed Bandit with Fully Heavy-tailed Dynamics Xingyu Wang (University of Amsterdam) and Mengfan Xu (University of Massachusetts Amherst) Program Track: Simulation Optimization Abstract AbstractWe study decentralized multi-agent multi-armed bandits, where clients communicate over sparse random graphs with heavy-tailed degree distributions and observe heavy-tailed reward distributions with potentially infinite variance. We are the first to address such fully heavy-tailed scenarios, capturing the dynamics and challenges of communication and inference among multiple clients in real-world systems, and provide regret bounds that match or improve upon existing results developed in even simpler settings. Under homogeneous rewards, we exploit hub-like structures unique to heavy-tailed graphs to aggregate rewards and reduce noise when constructing UCB indices; under $M$ clients and degree distributions with power-law index $\alpha > 1$, we attain regret of (almost) order $O(M^{1 -\frac{1}{\alpha}} \log{T})$. Under heterogeneous rewards,
clients synchronize by communicating with neighbors and aggregating exchanged estimators in UCB indices; by establishing information delay bounds over sparse random graphs, we attain a $O(M \log{T})$ regret. pdfGeneral-Purpose Ranking and Selection for Stochastic Simulation with Streaming Input Data Jaime Gonzalez-Hodar and Eunhye Song (Georgia Institute of Technology) Program Track: Simulation Optimization Program Tags: Data Driven, Metamodeling, Ranking and Selection Abstract AbstractWe study ranking and selection (R&S) where the simulator’s input models are increasingly more precisely estimated from the streaming data obtained from the system. The goal is to decide when to stop updating the model and return the estimated optimum with a probability of good selection (PGS) guarantee. We extend the general-purpose R&S procedure by Lee and Nelson by integrating a metamodel that represents the input uncertainty effect on the simulation output performance measure. The algorithm stops when the estimated PGS is no less than 1−α accounting for both prediction error in the metamodel and input uncertainty. We then propose an alternative procedure that terminates significantly earlier while still providing the same (approximate) PGS guarantee by allowing the performance measures of inferior solutions to be estimated with lower precision than those of good solutions. Both algorithms can accommodate nonparametric input models and/or performance measures other than the means (e.g., quantiles). pdfExplore then Confirm: Investment Portfolios for New Drug Therapies Zaile Li and Stephen Chick (INSEAD); Sam Daems (Université Libre de Bruxelles, Waterland Private Equity); and Shane Henderson (Cornell University) Program Track: Simulation Optimization Abstract AbstractNew medical technologies must pass several risky hurdles, such as multiple phases of clinical trials, before market access and reimbursement. 
A portfolio of technologies pools these risks, reducing the collective financial risk of such development while also improving the chances of identifying a successful technology. We propose a stylized model of a portfolio of technologies, each of which must pass one or two phases of clinical trials before market access is possible. Using ideas from Bayesian sequential optimization, we study the value of running response-adaptive clinical trials to flexibly allocate resources across clinical trials for technologies in a portfolio. We suggest heuristics for the response-adaptive policy and find evidence for their value relative to non-adaptive policies. pdf Data Driven, Monte Carlo, Simulation OptimizationOptimization with Risk Measures Session Chair: Zhaolin Hu (Tongji University) Efficient Optimization Procedures For CVaR-Constrained Optimization with Regularly Varying Risk Factors Anish Senapati and Jose Blanchet (Stanford University); Fan Zhang (Stanford University, Two Sigma); and Bert Zwart (Eindhoven University of Technology, Center of Mathematics and Computer Science) Program Track: Simulation Optimization Abstract AbstractWe study chance-constrained optimization problems (CC-OPT) with regularly varying distributions where
risk is measured through Conditional Value-at-Risk. The usual probabilistic constraints within CC-OPTs
have limitations in modeling and tractability, motivating a less constrained conditional value-at-risk CC-OPT. We design a stochastic gradient descent-type algorithm to solve this relaxation, combining techniques
and theory from the optimization and rare-event simulation literature. Rare-event simulation techniques
and a precise preconditioning, motivated through an epi-convergence argument, are employed to find the
optimal solution as the chance constraints become tighter. We show that our method does not depend on
the constraints’ rarity for regularly varying distributions. Theoretical and numerical results concerning two
chance-constrained problems illustrate the advantages of our new method over classical stochastic gradient
descent methods, with near-constant runtime complexity as a function of the rarity parameter. pdfQuantile-Boosted Stochastic Approximation Best Contributed Theoretical Paper - Finalist Jinyang Jiang (Peking University), Bernd Heidergott (Vrije Universiteit Amsterdam), and Yijie Peng (Peking University) Program Track: Simulation Optimization Program Tag: Data Driven Abstract AbstractStochastic approximation (SA) offers a recursive framework for tracking the quantiles of a parameterized system’s output distribution using observed samples. In this paper, we employ SA-based quantile trackers to approximate the gradient of an objective function and integrate them into a unified SA scheme for finding stationary points. The proposed gradient estimation framework accommodates both finite-difference and score-function methods. Our method allows for dynamically adjusting the number of trackers within a single optimization run. This adaptability enables more efficient and accurate approximation of the true objective gradient. The resulting single time-scale estimator is also applicable to stationary performance measures. Numerical experiments confirm the effectiveness and robustness of the proposed approach. pdfStatistical Properties of Mean-Variance Portfolio Optimization Zhaolin Hu (Tongji University) Program Track: Simulation Optimization Program Tag: Monte Carlo Abstract AbstractWe study Markowitz’s mean-variance portfolio optimization problem. When this model is used in practice, the mean vector and the covariance matrix of the asset returns often need to be estimated from sample data, and the sampling errors propagate to the optimization output. In this paper, we consider three commonly used mean-variance models and establish the asymptotic properties of the conventional sample approximations that are widely adopted and studied, by leveraging stochastic optimization theory. 
We show that for all three models, under certain conditions the sample approximations have the desired consistency and achieve a convergence rate of the square root of the sample size, and the asymptotic variance depends on the first four moments of the returns. We conduct numerical experiments to verify these asymptotic properties. We also conduct experiments to illustrate that the asymptotic normality might not hold when the fourth moments of the returns do not exist. pdf Data Driven, Monte Carlo, Simulation OptimizationOnline Decision Making and Prediction Session Chair: Robert James Lambert (Lancaster University) Parallel Simulation-based Prediction in Discrete-time Discrete-state-space Markov Chains with GPU Implementation Yifu Tang (University of California, Berkeley); Peter W. Glynn (Stanford University); and Zeyu Zheng (University of California, Berkeley) Program Track: Simulation Optimization Abstract AbstractEffectively predicting the future performance of a stochastic system via simulation is a critical need. In particular, a challenging class of tasks arises when the stochastic system has an underlying unobservable Markov chain that drives the dynamics, and one only gets to partially observe it through a function of the underlying Markov process. Lim and Glynn (2023) proposed a long-run estimator to predict the future performance of a stationary system given currently observed states. However, it can be challenging to run a long-run estimator in parallel. In this work, we propose a modified estimator that is easier to simulate in parallel, especially in a way that fits parallel simulation via Graphics Processing Units (GPUs). We discuss implementation procedures on GPUs and then analyze the asymptotic convergence behavior of the parallel estimator and its corresponding central limit theorem. 
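The parallel-trajectory idea behind such GPU estimators can be illustrated with a vectorized discrete-time Markov chain simulator. The sketch below uses plain NumPy standing in for a GPU kernel and a made-up transition matrix; it is a generic illustration of simulating many trajectories at once, not the authors' estimator.

```python
import numpy as np

def simulate_chains(P, init_states, n_steps, rng):
    """Simulate many independent DTMC trajectories in parallel.

    P: (S, S) row-stochastic transition matrix.
    init_states: (N,) initial state indices, one per trajectory.
    Vectorizing across trajectories is the same pattern a GPU
    kernel would parallelize across threads.
    """
    cum = np.cumsum(P, axis=1)               # cumulative rows, (S, S)
    states = init_states.copy()
    path = np.empty((n_steps + 1, len(states)), dtype=int)
    path[0] = states
    for t in range(1, n_steps + 1):
        u = rng.random(len(states))
        # inverse-CDF sampling for every trajectory at once
        states = (u[:, None] > cum[states]).sum(axis=1)
        path[t] = states
    return path
```

Each time step touches all trajectories with one array operation, which is why the per-step cost is essentially independent of the number of replications on parallel hardware.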
pdfNeural Network-Based Methods for Continuous Simulation Optimization Problems with Covariates Yize Hao and Guangxin Jiang (Harbin Institute of Technology) Program Track: Simulation Optimization Abstract AbstractSimulation-based methods for real-time decision-making have attracted increasing research attention, and such problems are usually formulated as the simulation optimization problem with covariates. There are generally two methods to address this problem. The first builds a relationship between the objective function and the covariates, allowing a solver to quickly find the solution. The second directly builds a relationship between the optimal solution and the covariates. In this paper, we focus on continuous simulation optimization with covariates and investigate neural network-based implementations of both methods. We demonstrate that even when the objective function is continuous, the optimal solution may exhibit discontinuities with respect to the covariates, limiting the applicability of the optimal solution method. In contrast, the objective function method remains effective and broadly applicable. We further establish stability conditions under which the optimal solution method is valid. Numerical experiments are conducted to support our theoretical findings. pdfOptimization of Queueing Systems Using Streaming Simulation Robert James Lambert, James Grant, and Rob Shone (Lancaster University) and Roberto Szechtman (Naval Postgraduate School) Program Track: Simulation Optimization Program Tags: Data Driven, Monte Carlo Abstract AbstractWe consider the problem of adaptively determining the optimal number of servers in an M/G/c queueing system in which the unknown arrival rate must be estimated using data that arrive sequentially over a series of observation periods. 
We propose a stochastic simulation-based approach that uses iteratively updated parameters within a greedy decision-making policy, with the selected number of servers minimising a Monte Carlo estimate of a chosen objective function. Under minimal assumptions, we derive a central limit theorem for the Monte Carlo estimator and establish an asymptotic bound on the probability of incorrect selection of the policy. We also demonstrate the empirical performance of the policy in a finite-time numerical experiment. pdf Metamodeling, Monte Carlo, Python, Sampling, Simulation OptimizationAlgorithmic Advances in Simulation Optimization I Session Chair: Yunsoo Ha (National Renewable Energy Laboratory) A New Stochastic Approximation Method for Gradient-based Simulated Parameter Estimation Zehao Li and Yijie Peng (Peking University) Program Track: Simulation Optimization Program Tags: Monte Carlo, Sampling Abstract AbstractThis paper tackles the challenge of parameter calibration in stochastic models, particularly in scenarios where the likelihood function is unavailable in an analytical form. We introduce a gradient-based simulated parameter estimation (GSPE) framework, which employs a multi-time scale stochastic approximation algorithm. This approach effectively addresses the ratio bias that arises in both maximum likelihood estimation and posterior density estimation problems. The proposed algorithm enhances estimation accuracy and significantly reduces computational costs, as demonstrated through extensive numerical experiments. Our work extends the GSPE framework to handle complex models such as hidden Markov models and variational inference-based problems, offering a robust solution for parameter estimation in challenging stochastic environments. pdfASTROMoRF: Adaptive Sampling Trust-Region Optimization with Dimensionality Reduction Benjamin Wilson Rees, Christine S.M. 
Currie, and Vuong Phan (University of Southampton) Program Track: Simulation Optimization Program Tags: Metamodeling, Python Abstract AbstractHigh dimensional simulation optimization problems have become prevalent in recent years. In practice, the objective function is typically influenced by a lower dimensional combination of the original decision variables, and implementing dimensionality reduction can improve the efficiency of the optimization algorithm. In this paper, we introduce a novel algorithm ASTROMoRF that combines adaptive sampling with dimensionality reduction, using an iterative trust-region approach. Within a trust-region algorithm a series of surrogates or metamodels is built to estimate the objective function. Using a lower dimensional subspace reduces the number of design points needed for building a surrogate within each trust-region and consequently the number of simulation replications. We explain the basis for the algorithm within the paper and compare its finite-time performance with other state-of-the-art solvers. pdfMulti-Fidelity Stochastic Trust Region Method with Adaptive Sampling Yunsoo Ha and Juliane Mueller (National Renewable Energy Laboratory) Program Track: Simulation Optimization Program Tag: Monte Carlo Abstract AbstractSimulation optimization is often hindered by the high cost of running simulations. Multi-fidelity methods offer a promising solution by incorporating cheaper, lower-fidelity simulations to reduce computational time. However, the bias in low-fidelity models can mislead the search, potentially steering solutions away from the high-fidelity optimum. To overcome this, we propose ASTRO-MFDF, an adaptive sampling trust-region method for multi-fidelity simulation optimization. 
ASTRO-MFDF features two key strategies: (i) it adaptively determines the sample size and selects appropriate sampling strategies to reduce computational cost; and (ii) it selectively uses low-fidelity information only when a high correlation with the high-fidelity model is anticipated, reducing the risk of bias. We validate the performance and computational efficiency of ASTRO-MFDF through numerical experiments using the SimOpt library. pdf DOE, Simulation OptimizationAlgorithmic Advances in Simulation Optimization II Session Chair: Di Yu (Purdue University) Monte Carlo Digital Twin Worldview: A Proposal Susan R. Hunter, Raghu Pasupathy, and Bruce W. Schmeiser (Purdue University) Program Track: Simulation Optimization Abstract AbstractWe propose a worldview to facilitate the discussion of how Monte Carlo simulation experiments are conducted in a digital twin context. We use the term "Monte Carlo digital twin" to refer to a digital twin in which a Monte Carlo simulation model mimics the physical twin. The worldview we propose encompasses Nelson and Schmeiser's classical worldview for simulation experiments and adds key components of a Monte Carlo digital twin including a predictor, a decision maker, and bidirectional interaction between the Monte Carlo digital twin and the physical twin. We delineate the proposed worldview components in several examples. pdfDesigning a Frank-Wolfe Algorithm for Simulation Optimization Over Unbounded Linearly Constrained Feasible Regions Natthawut Boonsiriphatthanajaroen and Shane Henderson (Cornell University) Program Track: Simulation Optimization Abstract AbstractThe linearly constrained simulation optimization problem entails optimizing an objective function that is evaluated, approximately, through stochastic simulation, where the finite-dimensional decision variables lie in a feasible region defined by known, deterministic linear constraints. We assume the availability of unbiased gradient estimates. 
When the feasible region is bounded, existing algorithms are highly effective. We attempt to extend existing algorithms to also allow for unbounded feasible regions. We extend both the away-step (AFW) and boosted Frank-Wolfe (BFW) algorithms. Computational experiments compare these algorithms with projected gradient descent (PGD). An extension of BFW performs the best in our experiments overall, performing substantially better than both PGD and AFW. Moreover, PGD substantially outperforms AFW. We provide commentary on our experimental results and suggest avenues for further algorithm development. The article also showcases the use of the SimOpt library in algorithm development. pdfThe Derivative-Free Fully-Corrective Frank-Wolfe Algorithm for Optimizing Functionals Over Probability Spaces Best Contributed Theoretical Paper - Finalist Di Yu (Purdue University), Shane G. Henderson (Cornell University), and Raghu Pasupathy (Purdue University) Program Track: Simulation Optimization Program Tag: DOE Abstract AbstractThe challenge of optimizing a smooth convex functional over probability spaces is highly relevant in experimental design, emergency response, variations of the problem of moments, etc. A viable and provably efficient solver is the fully-corrective Frank-Wolfe (FCFW) algorithm. We propose an FCFW recursion that rigorously handles the zero-order setting, where the derivative of the objective is known to exist, but only the objective is observable. Central to our proposal is an estimator for the objective’s influence function, which gives, roughly speaking, the directional derivative of the objective function in the direction of point mass probability distributions, constructed via a combination of Monte Carlo, and a projection onto the orthonormal expansion of an L2 function on a compact set. 
A bias-variance analysis of the influence function estimator guides step size and Monte Carlo sample size choice, and helps characterize the recursive rate behavior on smooth non-convex problems. pdf Simulation OptimizationPanel: Simulation Optimization 2050 Session Chair: David J. Eckman (Texas A&M University) Simulation Optimization 2050 and Beyond Soumyadip Ghosh (IBM TJ Watson Research Center); Peter Haas (University of Massachusetts, Amherst); L. Jeff Hong (University of Minnesota, Minneapolis); Jonathan Ozik (Argonne National Laboratory); and Benjamin Thengvall (OptTek Systems, Inc.) Program Track: Simulation Optimization Abstract AbstractThe goal of this panel was to envision the future of simulation optimization research and practice over the next 25 years. The panel was composed of five simulation researchers from academia, industry, and research laboratories who shared their perspectives on the challenges and opportunities facing the field in light of contemporary advances in artificial intelligence, machine learning and computing hardware. The panelists also discussed the role simulation optimization can, should, and will play in supporting future decision-making under uncertainty. This paper serves as a collection of the panelists’ prepared statements. pdf Data Driven, Python, Sampling, Simulation OptimizationBlack-Box Optimization Session Chair: Yuhao Wang (Georgia Institute of Technology) A Simulation Optimization Approach to Optimal Experimental Design for Symbolic Discovery Kenneth Clarkson, Soumyadip Ghosh, Joao Goncalves, Rik Sengupta, Mark Squillante, and Dmitry Zubarev (IBM Research) Program Track: Simulation Optimization Abstract AbstractSymbolic discovery aims to discover functional relationships in scientific data gathered from a black-box oracle. In general, the mapping between oracle inputs and its response is constructed by hierarchically composing simple functions. 
In this study, we restrict ourselves to the case of selecting the most representative model from among a predefined finite set of model classes. The user is given the ability to sequentially generate data by picking inputs to query the oracle. The oracle call expends significant effort (e.g., computationally intensive simulation models), and so each input needs to be carefully chosen to maximize the information in the response. We propose an optimal experimental design formulation to sequentially identify the oracle query inputs and propose a simulation optimization algorithm to solve this problem. We present preliminary results from numerical experiments for a specific symbolic discovery problem in order to illustrate the working of the proposed algorithm. pdfTESO: Tabu-Enhanced Simulation Optimization for Noisy Black-Box Problems Bulent Soykan, Sean Mondesire, and Ghaith Rabadi (University of Central Florida) Program Track: Simulation Optimization Program Tag: Python Abstract AbstractSimulation optimization (SO) is frequently challenged by noisy evaluations, high computational costs, and complex, multimodal search landscapes. This paper introduces Tabu-Enhanced Simulation Optimization (TESO), a novel metaheuristic framework integrating adaptive search with memory-based strategies. TESO leverages a short-term Tabu List to prevent cycling and encourage diversification, and a long-term Elite Memory to guide intensification by perturbing high-performing solutions. An aspiration criterion allows overriding tabu restrictions for exceptional candidates. This combination facilitates a dynamic balance between exploration and exploitation in stochastic environments. We demonstrate TESO's effectiveness and reliability using a queue optimization problem, showing improved performance compared to benchmarks and validating the contribution of its memory components. 
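The short-term-memory mechanism central to tabu-style methods can be illustrated with a minimal loop. The sketch below is a generic noisy tabu search with a fixed-length tabu list, a best-so-far record, and a crude replication-averaging treatment of noise; it is not the authors' TESO procedure, which additionally maintains an Elite Memory and an aspiration criterion.

```python
import random

def tabu_search(f, x0, neighbors, n_iters=200, tabu_len=10, reps=5, seed=0):
    """Minimal tabu search for a noisy objective f(x, rng) (to minimize).

    Noise is handled crudely by averaging `reps` replications per
    evaluation; `neighbors(x)` returns candidate moves.
    """
    rng = random.Random(seed)
    est = lambda x: sum(f(x, rng) for _ in range(reps)) / reps
    x, fx = x0, est(x0)
    best, fbest = x, fx
    tabu = []                        # short-term memory of visited points
    for _ in range(n_iters):
        cands = [c for c in neighbors(x) if c not in tabu]
        if not cands:
            break
        scored = [(est(c), c) for c in cands]
        fx, x = min(scored)          # move to best non-tabu neighbor
        tabu.append(x)
        tabu = tabu[-tabu_len:]      # fixed-length tabu list
        if fx < fbest:               # record the incumbent best
            best, fbest = x, fx
    return best, fbest
```

Because recently visited points are forbidden, the search is forced away from the current basin even when every neighbor is worse, which is the diversification behavior the abstract describes.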
pdfNested Denoising Diffusion Sampling for Global Optimization Yuhao Wang (Georgia Institute of Technology), Haowei Wang (National University of Singapore), Enlu Zhou (Georgia Institute of Technology), and Szu Hui Ng (National University of Singapore) Program Track: Simulation Optimization Program Tags: Data Driven, Sampling Abstract AbstractWe propose a novel algorithm, Nested Denoising Diffusion Sampling (NDDS), for solving deterministic global optimization problems where the objective function is a black box—unknown, possibly non-differentiable, and expensive to evaluate. NDDS addresses this challenge by leveraging conditional diffusion models to efficiently approximate the evolving solution distribution without incurring the cost of extensive function evaluations. Unlike existing diffusion-based optimization methods that operate in offline settings and rely on manually specified conditioning variables, NDDS systematically generates these conditioning variables through a statistically principled mechanism. In addition, we introduce a data reweighting strategy to address the distribution mismatch between the training data and the target sampling distribution. Numerical experiments demonstrate that NDDS consistently outperforms the Extended Cross-Entropy (CE) method under the same function evaluation budget, particularly in high-dimensional settings. pdf
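The Cross-Entropy (CE) method referenced above as the baseline is a classical sampling-based global optimizer; a minimal Gaussian-parameterized version for black-box minimization looks roughly as follows. This is the generic textbook scheme, not the extended CE variant used in the paper.

```python
import numpy as np

def cross_entropy_min(f, mu0, sigma0, n_samples=100, elite_frac=0.1,
                      n_iters=50, seed=0):
    """Minimal cross-entropy method for black-box minimization.

    Maintains a Gaussian sampling distribution and refits it each
    iteration to the elite (lowest-f) samples, concentrating the
    search around promising regions.
    """
    rng = np.random.default_rng(seed)
    mu, sigma = np.asarray(mu0, float), np.asarray(sigma0, float)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iters):
        x = rng.normal(mu, sigma, size=(n_samples, mu.size))
        vals = np.array([f(xi) for xi in x])
        elite = x[np.argsort(vals)[:n_elite]]
        mu = elite.mean(axis=0)
        sigma = elite.std(axis=0) + 1e-12   # guard against collapse
    return mu
```

Each iteration spends `n_samples` function evaluations, which is exactly the budget bottleneck diffusion-based samplers such as NDDS aim to ease.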
Uncertainty Quantification and Robust Simulation Track Coordinator - Uncertainty Quantification and Robust Simulation: Ilya Ryzhov (University of Maryland), Wei Xie (Northeastern University) Input Modeling, Uncertainty Quantification and Robust SimulationStatistical Inference and Estimation Session Chair: Jingtao Zhang (Virginia Tech) Quantifying Uncertainty from Machine Learning Surrogate Models Embedded in Simulation Models Mohammadmahdi Ghasemloo and David J. Eckman (Texas A&M University) and Yaxian Li (Intuit) Program Tag: Input Modeling Abstract AbstractModern simulation models increasingly feature complex logic intended to represent how tactical decisions are made with advanced decision support systems (DSSs). For a variety of reasons, e.g., concerns about computational cost, data privacy, and latency, users might choose to replace DSSs with approximate logic within the simulation model. This paper investigates the impacts of replacing DSSs with machine learning surrogate models on the estimation of system performance metrics. We distinguish this so-called surrogate uncertainty from conventional input uncertainty and develop approaches for quantifying the error introduced by the use of surrogate models. Specifically, we explore bootstrapping and Bayesian model averaging methods for obtaining quantile-based confidence intervals for expected performance measures and propose using regression-tree importance scores to apportion the overall uncertainty across input and surrogate models. We illustrate our approach through a contact-center simulation experiment. pdfA Model-Free, Partition-based Approach to Estimating Sobol' Indices from Existing Datasets Jingtao Zhang and Xi Chen (Virginia Tech) Abstract AbstractThis paper investigates a model-free, partition-based method for estimating Sobol' indices using existing datasets, addressing the limitations of traditional variance-based global sensitivity analysis (GSA) methods that rely on designed experiments. 
We provide a theoretical analysis of the bias, variance, and mean squared error (MSE) associated with the partition-based estimator, exploring the effects of the sample size of the dataset and the number of partition bins on its performance. Furthermore, we propose a data-driven approach for determining the optimal number of bins to minimize the MSE. Numerical experiments demonstrate that the proposed partition-based method outperforms state-of-the-art GSA techniques. pdfEfficient Uncertainty Quantification of Bagging via the Cheap Bootstrap Arindam Roy Chowdhury and Henry Lam (Columbia University) Abstract AbstractBagging has emerged as an effective tool for reducing variance and enhancing stability in model training, via repeated data resampling followed by a suitable aggregation. Recently, it has also been used to obtain performance bounds for data-driven solutions in stochastic optimization. However, quantifying statistical uncertainty for bagged estimators can be challenging, as standard bootstrap would require resampling at both the bagging and the bootstrap stages, leading to multiplicative computation costs that can be prohibitively large. In this work, we propose a practical and theoretically justified approach using the cheap bootstrap methodology, which enables valid confidence interval construction for bagged estimators under a controllable number of model evaluations. We establish asymptotic validity of our approach and demonstrate its empirical performance through simulation experiments. Our results show that the proposed method achieves nominal coverage with significantly reduced computational burden compared to other benchmarks. 
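The cheap bootstrap idea referenced above replaces the usual hundreds of bootstrap replications with a handful, B, compensating with a Student-t quantile with B degrees of freedom. A minimal sketch for a generic estimator (SciPy assumed for the t quantile; this is the basic recipe, not the paper's bagging-specific procedure):

```python
import numpy as np
from scipy import stats

def cheap_bootstrap_ci(data, estimator, B=5, alpha=0.05, seed=0):
    """Confidence interval via the cheap bootstrap.

    Uses only B resample evaluations (B can be as small as a few);
    the interval is psi_hat +/- t_{B,1-alpha/2} * S, where S^2 is
    the mean squared deviation of the B resample estimates from
    psi_hat evaluated on the original data.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    psi_hat = estimator(data)
    reps = np.array([estimator(data[rng.integers(0, n, n)])
                     for _ in range(B)])
    s = np.sqrt(np.mean((reps - psi_hat) ** 2))
    t = stats.t.ppf(1 - alpha / 2, df=B)
    return psi_hat - t * s, psi_hat + t * s
```

With B = 5 the estimator is evaluated six times in total, versus hundreds for a conventional percentile bootstrap; the wider t quantile pays for the small B.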
pdf Uncertainty Quantification and Robust SimulationActive Learning and Digital Twins Session Chair: Wei Xie (Northeastern University) Active Learning for Manifold Gaussian Process Regression Yuanxing Cheng (Illinois Institute of Technology); Lulu Kang (University of Massachusetts Amherst); Yiwei Wang (University of California, Riverside); and Chun Liu (Illinois Institute of Technology) Program Track: Uncertainty Quantification and Robust Simulation Abstract AbstractThis paper introduces an active learning framework for manifold Gaussian Process (GP) regression, combining manifold learning with strategic data selection to improve accuracy in high-dimensional spaces. Our method jointly optimizes a neural network for dimensionality reduction and a GP regressor in the latent space, supervised by an active learning criterion that minimizes global prediction error. Experiments on synthetic data demonstrate superior performance over random sequential learning. The framework efficiently handles complex, discontinuous functions while preserving computational tractability, offering practical value for scientific and engineering applications. Future work will focus on scalability and uncertainty-aware manifold learning. pdfDynamic Calibration Framework for Digital Twins Using Active Learning and Conformal Prediction Ozge Surer (Miami University), Xi Chen (Virginia Tech), and Sara Shashaani (North Carolina State University) Program Track: Uncertainty Quantification and Robust Simulation Abstract AbstractThis work presents an adaptive framework for dynamically calibrating digital twins (DTs) in response to evolving real-world (RW) conditions. Traditional simulation-based models often rely on fixed parameter estimates, limiting their adaptability over time. To address this, our approach integrates active learning (AL) with a dynamic calibration mechanism that keeps the DT aligned with RW observations. 
At each time step, a new data batch is received, and a conformal prediction-based monitoring system assesses whether recalibration is needed. When a change in the RW system state is detected, DT parameters are updated using an efficient AL strategy. The framework reduces computational overhead by avoiding unnecessary DT evaluations while maintaining accurate system representation. We demonstrate the effectiveness of the proposed approach in achieving adaptive, cost-efficient DT calibration over time. pdfA Symbolic and Statistical Learning Framework to Discover Bioprocessing Regulatory Mechanism: Cell Culture Example Keilung Choy, Wei Xie, and Keqi Wang (Northeastern University) Program Track: Uncertainty Quantification and Robust Simulation Abstract AbstractBioprocess mechanistic modeling is essential for advancing intelligent digital twin representation of biomanufacturing, yet challenges persist due to complex intracellular regulation, stochastic system behavior, and limited experimental data. This paper introduces a symbolic and statistical learning framework to identify key regulatory mechanisms and quantify model uncertainty. Bioprocess dynamics is formulated with stochastic differential equations characterizing intrinsic process variability, with a predefined set of candidate regulatory mechanisms constructed from biological knowledge. A Bayesian learning approach is developed, which is based on a joint learning of kinetic parameters and regulatory structure through a formulation of the mixture model. To enhance computational efficiency, a Metropolis-adjusted Langevin algorithm with adjoint sensitivity analysis is developed for posterior exploration. Compared to state-of-the-art posterior sampling approaches, the proposed framework achieves improved sample efficiency and robust model selection. A cell culture simulation study demonstrates its ability to recover missing regulatory mechanisms and improve model fidelity under data-limited situations. 
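The Metropolis-adjusted Langevin algorithm (MALA) named in the abstract augments a gradient-informed Langevin proposal with a Metropolis-Hastings correction. The sketch below is a generic MALA for a user-supplied log-posterior and gradient, not the authors' adjoint-sensitivity implementation.

```python
import numpy as np

def mala(log_post, grad_log_post, x0, step=0.1, n_steps=1000, seed=0):
    """Metropolis-adjusted Langevin algorithm (MALA) sketch.

    Proposes x' = x + step * grad(x) + sqrt(2 * step) * N(0, I) and
    accepts with the Metropolis-Hastings ratio, which corrects for
    the asymmetry of the gradient-shifted proposal.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    samples = []

    def logq(to, frm):
        # log density (up to a constant) of proposing `to` from `frm`
        mean = frm + step * grad_log_post(frm)
        return -np.sum((to - mean) ** 2) / (4 * step)

    for _ in range(n_steps):
        prop = (x + step * grad_log_post(x)
                + np.sqrt(2 * step) * rng.standard_normal(x.size))
        log_alpha = (log_post(prop) - log_post(x)
                     + logq(x, prop) - logq(prop, x))
        if np.log(rng.random()) < log_alpha:
            x = prop
        samples.append(x.copy())
    return np.array(samples)
```

The gradient drift is what lets MALA explore a posterior far faster than a random-walk proposal, which is the efficiency motivation the abstract cites.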
pdf Data Driven, Python, Ranking and Selection, Analysis Methodology, Uncertainty Quantification and Robust SimulationStatistical Estimation and Performance Analysis Session Chair: Sara Shashaani (North Carolina State University) Distributionally Robust Logistic Regression with Missing Data Weicong Chen and Hoda Bidkhori (George Mason University) Program Track: Uncertainty Quantification and Robust Simulation Program Tags: Data Driven, Python Abstract AbstractMissing data presents a persistent challenge in machine learning. Conventional approaches often rely on data imputation followed by standard learning procedures, typically overlooking the uncertainty introduced by the imputation process. This paper introduces Imputation-based Distributionally Robust Logistic Regression (I-DRLR)—a novel framework that integrates data imputation with class-conditional Distributionally Robust Optimization (DRO) under the Wasserstein distance. I-DRLR explicitly models distributional ambiguity in the imputed data and seeks to minimize the worst-case logistic loss over the resulting uncertainty set. We derive a convex reformulation to enable tractable optimization and evaluate the method on the Breast Cancer and Heart Disease datasets from the UCI Repository. Experimental results demonstrate consistent out-of-sample improvements in both prediction accuracy and ROC-AUC, outperforming traditional methods that treat imputed data as fully reliable. pdfWorst-case Approximations for Robust Analysis in Multiserver Queues and Queuing Networks Hyung-Khee Eun and Sara Shashaani (North Carolina State University) and Russell Barton (Pennsylvania State University) Program Track: Uncertainty Quantification and Robust Simulation Abstract AbstractThis study explores strategies for robust optimization of queueing performance in the presence of input model uncertainty. 
Ambiguity sets for Distributionally Robust Optimization (DRO) based on Wasserstein distance are preferred for general DRO settings where the computation of performance given the distribution form is straightforward. For complex queueing systems, distributions with large Wasserstein distance (from the nominal distributions) do not necessarily provide extreme objective values. Thus, the calculation of performance extremes must be done via an inner level of maximization, making DRO a compute-intensive activity. We explore approximations for queue waiting time in a number of settings and show how they can provide low-cost guidance on extreme objective values, allowing for more rapid DRO. Approximations are provided for single- and multi-server queues and queueing networks, each illustrated with an example. We also show, in settings with a small number of solution alternatives, that these approximations lead to robust solutions. pdfRevisiting an Open Question in Ranking and Selection Under Unknown Variances Best Contributed Theoretical Paper - Finalist Jianzhong Du (University of Science and Technology of China), Siyang Gao (City University of Hong Kong), and Ilya O. Ryzhov (University of Maryland) Program Track: Analysis Methodology Program Tag: Ranking and Selection Abstract AbstractExpected improvement (EI) is a common ranking and selection (R&S) method for selecting the optimal system design from a finite set of alternatives. Ryzhov (2016) observed that, under normal sampling distributions with known variances, the limiting budget allocation achieved by EI was closely related to the theoretical optimum. However, when the variances are unknown, the behavior of EI was quite different, giving rise to the question of whether the optimal allocation in this setting was totally distinct from the known-variance case. This research solves that problem with a new analysis that can distinguish between known and unknown variance, unlike previously existing theoretical frameworks. 
We derive a new optimal budget allocation for this setting, and confirm that the limiting behavior of EI has a similar relationship to this allocation as in the known-variance case. pdf
Track Coordinator - Professional Development: Caroline C. Krejci (The University of Texas at Arlington), Navonil Mustafee (University of Exeter, The Business School) Professional DevelopmentFrom Submission to Publication: Meet the Editors of Simulation Journals Session Chair: Navonil Mustafee (University of Exeter, The Business School) From Submission to Publication: Meet the Editors of Simulation Journals Philippe J. Giabbanelli (Old Dominion University), Anastasia Anagnostou (Brunel University London), Ignacio J. Martinez-Moyano (Argonne National Laboratory), Enlu Zhou (Georgia Institute of Technology), and Wentong Cai (Nanyang Technological University) Program Track: Professional Development Abstract AbstractThis interactive panel brings together editors from leading simulation journals to share insights on publishing high-quality simulation research. Panelists will clarify each journal’s scope, discuss common pitfalls, and offer practical advice on how to maximize success in the submission and review process. Attendees will gain perspectives on emerging topics sought by journals, best practices for engaging with editors and reviewers, and tips for developing impactful papers. The session will include opportunities for audience interaction, questions, and direct feedback from editors to support professional development in scholarly publishing. pdf Professional DevelopmentExploring Modeling and Simulation Career Paths in Industry Session Chair: Caroline C. Krejci (The University of Texas at Arlington) Exploring Modeling and Simulation Career Paths in Industry Nelson Alfaro Rivas (MOSIMTEC, LLC); Abhilash Deori and Abhineet Mittal (Amazon); Jae Lee (CrowdStrike); and Shelby Seifer (Integrated Insight) Program Track: Professional Development Abstract AbstractThis panel will provide diverse perspectives on modeling and simulation (M&S) careers in industry from panelists with a variety of different professional backgrounds and experiences. 
Panelists will describe their own career paths and will share their views on how M&S in industry compares to academia, how it differs across industries and organizations, and what the future holds, with respect to challenges, opportunities, and the evolving role of modelers in an era of rapid technological change. Attendees will gain practical insights into how they can find challenging and meaningful M&S job opportunities, what companies are looking for, how applicants can set themselves apart, and ways of sustaining professional growth throughout their careers. While this session is aimed primarily at students and early-career modelers, it will offer modelers at any stage of their careers an opportunity to ask questions and learn more about how M&S is being implemented in industry.
Track Coordinator - Vendor: Amy Greer (MOSIMTEC, LLC), Nathan Ivey (Rockwell Automation Inc.) VendorVendor Workshop: Minitab Building Smarter Simulations with Minitab and Simul8 Christoph Werner Abstract AbstractOur workshop introduces you to Minitab’s discrete event simulation tool, Simul8. For anyone unfamiliar with Simul8, we give you a brief introduction to its interface and how to quickly build simulations and obtain useful results. You are welcome to build along with us by bringing along your laptop and using the trial version. We then showcase some more advanced examples of our past work, including the integration with live data and how to elevate your simulation to a digital twin, as well as quickly updating and self-building parts of your simulation. Finally, we present some content for existing or interested Minitab users and how other Minitab products can work with Simul8. VendorVendor Workshop: Simio LLC, Session I Simio Modeling Foundations for New and Prospective Users Lauren Solmssen (Simio LLC) Abstract AbstractThis workshop introduces new users to the fundamental concepts of modeling in Simio. Learn how you can use Simio’s intuitive interface to quickly create a simple model, giving you a sense of how discrete event simulations work. Learn how your model could be the basis for a digital twin that can be used for operational decision making. Get a glance at the more advanced capabilities of Simio that allow you to visualize and optimize your system. This workshop is ideal for professionals evaluating Simio for implementation in their organization and for academics considering using Simio for teaching and research. VendorVendor Workshop: The AnyLogic Company AnyLogic Workshop Andrei Borshchev and Colin Macy (The AnyLogic Company) Abstract AbstractJoin us for a 90-minute tour of the world’s most advanced ecosystem for simulation and digital twinning, adopted by 40% of Fortune 100 companies. 
We’ll start with where simulation fits within the technology stack of Digital Transformation and discuss the current industry requirements for simulation tools. From there, we’ll go with you through the full lifecycle of a simulation model – from conceptual modeling and animation design to experiment design, deployment, integration, and version management. With the latest AnyLogic technology now available directly in the browser, you’ll have the option to try it yourself or simply sit back and watch. We will also touch on the relationship between Simulation and AI, and showcase some of the AnyLogic developments in this area. VendorVendor Workshop: GreenNet OptiSim The Balance of Power – Diesel vs Electrification for Mines Jacob Bennetts (GreenNet Optisim, Innovation Industries) and Adam Bardsley (GreenNet Optisim) Abstract AbstractThis interactive workshop explores the future of mine haulage by comparing diesel and electrified fleets through advanced simulation. Participants will gain hands-on insights into:
Fuel burn and diesel logistics – modelling haul truck consumption, refuelling cycles, and the impact of idle time.
Battery sizing and duty cycles – understanding energy demand, battery lifetime, and how vehicle range changes with route, load, and grade.
Refuelling vs charging infrastructure – simulating pit-to-plant energy requirements, station placement, and dwell times for both diesel refuelling and electric charging.
Renewable energy integration – assessing the role of on-site solar, wind, or hybrid generation in supporting mine fleet decarbonisation.
Scenario comparisons – testing diesel, hybrid, and full battery-electric options to highlight trade-offs in cost, productivity, and carbon emissions.
By the end of the session, attendees will understand how GreenNet OptiSim can model whole-of-mine energy strategies, identify infrastructure bottlenecks, and support confident decisions on fleet transition pathways. VendorVendor Workshop: Veydra.io AI-enabled Simulation: Building Smart Simulations Faster Alton Alexander Abstract AbstractThis fast-paced tutorial walks through a modern simulation workflow—from problem framing to automated calibration and policy evaluation—powered by the Veydra platform. It’s an ideal prelude to the main conference presentation, focusing on the tools and principles that make decision intelligence practical, transparent, and scalable. VendorVendor Workshop: Simio LLC, Session II Expanding Simulation Capabilities with Python Integration in Simio Paul Glaser (Simio LLC) Abstract AbstractThis workshop explores three practical approaches to integrating Python's analytical capabilities with Simio's simulation environment. Participants will gain hands-on experience implementing Python-based data processing, custom logic, and machine learning integration within Simio models. The session demonstrates how Python integration enhances Simio's native capabilities through: (1) preprocessing external data sources, (2) extending model logic with specialized algorithms, and (3) integrating predictive analytics within simulation workflows. Each integration technique is illustrated with practical examples drawn from manufacturing, healthcare, and logistics applications. Attendees will learn implementation best practices, performance considerations, and approaches for maintaining synchronized execution between Python and Simio runtime environments. VendorVendor Workshop: Arena by Rockwell Automation Supercharge Your Arena Modeling and Analysis with Python Nathan Ivey (Rockwell Automation Inc., Arena Simulation) Abstract AbstractLearn ways to automate your Arena model building and analysis with Python. 
By accessing Arena’s object model via Python, you can automatically build, modify, and manipulate Arena models, opening the door to using external Python libraries for machine learning, optimization, and more. We’ll show examples that will enable you to build simulation models from data in an external repository, efficiently place animation objects, and link to external Python libraries for enhanced analysis. VendorVendor Workshop: TwinAI Inc Simini: GenAI-Powered Simulation Modeling Sheetal Naik (TwinAI Inc) Abstract AbstractSimini is a full stack AI-powered simulation platform that redefines how models are created, explored, and analyzed. By combining large language models with an intuitive interface, it enables users to describe systems in plain language and automatically generate simulation models that can be refined through guided interaction. The platform unifies intelligent model creation, interactive scenario exploration, and automated insight generation in a single environment. In this vendor workshop, attendees will get an exclusive first look at Simini’s capabilities through a live demonstration of natural language model generation and scenario exploration in the interactive Playground. Participants will experience how Simini simplifies model development, enhances learning, and makes simulation more accessible, efficient, and engaging across diverse applications. VendorSimulation Lifecycle Session Chair: Andrei Borshchev (The AnyLogic Company) Simini: Integrating Generative AI into the Full Simulation Lifecycle Sheetal Naik and Mohammad Dehghani (TwinAI Inc) Abstract AbstractSimini is a full stack AI-powered Discrete Event Simulation (DES) platform that brings generative AI capabilities to the field of simulation. 
It is built around three core pillars: (I) Intelligent Model Generation, where users describe systems in plain language and the platform creates executable simulation models; (II) Interactive Scenario Exploration, which allows users to adjust parameters, run experiments, and compare outcomes guided by AI; and (III) Insightful Interpretation, where results are automatically analyzed to identify bottlenecks, highlight key patterns, and suggest improvements. Through this integrated approach, the simulation platform streamlines model development, experimentation, and decision making in a unified and accessible environment. docx, pdfAnyLogic Ecosystem for Simulation: From Development to Deployment Andrei Borshchev and Colin Macy (The AnyLogic Company) Abstract AbstractToday, AnyLogic is the standard for simulation and digital twinning across many of the world’s top businesses — not only because of the flexibility of its modeling language and its rich set of libraries (Material Handling, Process Modeling, Fluid, Pedestrian, Road Traffic, Rail, etc.), but also because the AnyLogic ecosystem supports the entire model lifecycle: from conceptual modeling and animation design to deployment, workflow integration, and version and user management. In this vendor demo, we will provide an overview of the key AnyLogic ecosystem components and capabilities, including the latest technology advancements. docx, pdf VendorSimulation for Transformative Change Session Chair: Anna Kamphaus (SimWell) How Decisions Get Made: Introducing the Decision Capabilities Map Anna Kamphaus (SimWell Inc.) Abstract AbstractOrganizations make thousands of decisions every day, but only a handful define service, cost, and resilience. 
The Decision Capabilities Map brings clarity by highlighting eight areas where decisions are both unique and high impact: Capital & Network Strategy, Production Scheduling, Labor Planning, Production Routing, Maintenance Planning, Inventory Staging, Demand Allocation, and Contingency Response. This session introduces the map and shows how teams can use it to guide their Decision Intelligence journey. The process begins by identifying where your organization’s decisions are unique, then assessing your current posture — whether expanding, integrating, or contracting. With that context, you can target your Decision Intelligence investments in the decision areas most likely to yield meaningful results. Jon Santavy will outline how the Decision Capabilities Map shifts the conversation from isolated projects to a structured framework for scaling better, faster decisions across the enterprise. docx, pdfEmpowering Digital Transformation With Extendsim: From Core Enhancements To Autonomous Warehouse Digital Twins Tanaji Mali and Peter Tag (ANDRITZ Inc) Abstract AbstractExtendSim, now part of ANDRITZ, continues to evolve with powerful new capabilities that elevate performance and usability across a wide range of applications. This presentation introduces key innovations: multicore analysis for faster scenario execution, ExtendSim Cloud for collaborative workflows, Python integration for advanced analytics and reinforcement learning, and a new error log for streamlined debugging. We’ll showcase an Autonomous Warehouse Digital Twin that supports both Warehouse Design Validation—throughput, capacity, cycle time, resource utilization, bottleneck identification, and scheduling strategies—and Warehouse Control System Checkout, enabling the testing of control logic, interlocks, sequences, and operational strategies in a closed-loop virtual environment. 
The model also facilitates operator training, maintenance and reliability planning, and experimentation with future control logic in a risk-free setting. These capabilities demonstrate how ExtendSim empowers simulation-based engineering throughout the project lifecycle—from feasibility studies and design validation to operational strategy development—driving digital transformation across industries. docx, pdf VendorAI and Digital Twins Session Chair: Jennifer Cowden (BigBear.ai) Open-source Simulation – the Infrastructure Layer of Operational AI Farzin Arsanjani and Arash Mahdavi (Simuland.ai) Abstract AbstractSimulation has long stood at the forefront of methodologies capable of addressing complex questions—and there is no doubt that the complexity of our world continues to grow at an unprecedented pace. The Winter Simulation Conference, with its more than half a century of history, stands as a testament to the depth and breadth of simulation modeling in providing real—and often the only—solutions to behaviors emerging from complex adaptive systems.
In this presentation, we will discuss Simuland.ai’s mission to bring simulation to the forefront of operational and applied AI. We will explore what it means to have access to a powerful open-source simulation framework working alongside coding agents and examine whether such integration can finally democratize simulation for a broader audience than ever before. docx, pdfBigBear.ai Digital Twin Solutions Jennifer Cowden, Aaron Nelson, and Jay Sales (BigBear.ai) Abstract AbstractBigBear.ai is at the forefront of innovation and is committed to supporting the critical infrastructure driving our nation’s competitive edge. We deploy cutting-edge AI, machine learning, and computer vision solutions to defend critical operations and win with decision advantage.
ProModel® is a powerful simulation-based predictive analytics software designed to optimize complex processes and improve decision-making across various industries, including manufacturing, warehousing, supply chain logistics, healthcare, and defense. By utilizing discrete-event simulation, ProModel® enables organizations to analyze their efficiency, forecast outcomes, and identify areas for improvement. Its advanced optimization and predictive analytics features allow users to test different scenarios, mitigate risks, and make confident, data-driven decisions.
Cloud-based ProModel.ai takes this a step further, connecting data to create an always-updated digital twin. This digital twin not only answers strategic questions but also facilitates real-time operational decision-making. docx, pdf VendorSimulation for Holistic System Improvements Session Chair: Keyhoon Ko (VMS Global, Inc.) Holistic Capacity Planning for Semiconductor Fabs: Combining Production and Material Handling Simulations Keyhoon Ko, Seokcheol Nicholas Chang, and Young Ju Kwon (VMS Global, Inc.) and Byung-Hee Kim (VMS Solutions, Co., Ltd.) Abstract AbstractThe surge in global semiconductor demand is pushing manufacturers to expand capacity while keeping existing fabs operational. Such expansion is highly complex, requiring not only additional equipment but also careful planning of production and logistics interactions. Cleanroom space is limited, equipment is costly and tightly integrated, and automated material handling systems such as OHTs and AGVs often become bottlenecks if not properly accounted for. Traditional production simulations emphasize throughput and processing times but usually overlook logistics constraints, while logistics-only models fail to reflect production variability. To address this gap, we propose an integrated simulation framework that unifies production and logistics modeling. This approach enables holistic evaluation of layout modifications, equipment additions, and material flow scenarios, thereby supporting strategic decision-making for fab expansion. By capturing both manufacturing and logistics dynamics in a single environment, the framework provides more accurate insights into capacity planning and system performance. 
docx, pdfAccelerate Decision Intelligence with AI-Assisted Simulation: Introducing the Strategic Decision Lab Alton Alexander (Front Analytics Inc) Abstract AbstractWe introduce the Strategic Decision Lab, a collection of hands-on simulations powered by Veydra’s AI-assisted simulation platform for the purpose of advancing decision intelligence. The Lab enables educators to embed decision-making exercises into their curricula and equips industry teams to run rapid workshops on complex policy and strategy questions. Participants may choose to follow along with the live interactive simulations, guided by an AI assistant that accelerates model design and scenario analysis. Every attendee will leave with a free starter license to Veydra Primer so they can immediately run polished simulations and guided lessons in their browser. docx, pdf VendorSuccessful Simulation Project Approaches Session Chair: Martin Franklin (MOSIMTEC, LLC) Design at the Speed of Change: Modular Simulation Frameworks for Rapid Model Evolution Abhineet Mittal and Abhilash Deori (Amazon) Abstract AbstractModern design and automation programs evolve continuously, challenging traditional monolithic simulation approaches to keep up. This presentation explores how a modular simulation architecture built on standardized, reusable, and parameter-driven components enables faster model development, easier integration with design tools, and rapid adaptation to configuration changes. By decoupling model logic, visualization, and control layers, simulation teams can update or replace modules without re-engineering entire systems, significantly reducing development time and improving traceability. Drawing from real-world digital twin and emulation use cases, the session illustrates how modularity transforms simulation into a dynamic, scalable platform that evolves in lockstep with product and process design. 
docx, pdfScaling Solutions: Considerations for Simulation Projects of Various Sizes Nelson Alfaro Rivas and Martin Franklin (MOSIMTEC, LLC) Abstract AbstractSimulation is applied across a wide range of industries to solve complex operational challenges and support strategic decision-making. From optimizing agricultural systems and pharmaceutical production to improving mining operations, consumer goods supply chains, and large-scale manufacturing, simulation projects vary greatly in purpose and scale. Regardless of industry, successful simulation requires a structured and disciplined approach. This presentation outlines best practices across key phases of a simulation engagement. Real-world case studies will illustrate how early stakeholder alignment prevents misaligned expectations, while milestone-based model development supports effective verification and validation, especially in medium and large projects. Large-scale initiatives often involve multiple modelers and evolving requirements, reinforcing the need for version control, modular design, and proper documentation. Finally, long-term model sustainability depends on thoughtful handover strategies and user enablement. Practical lessons, common pitfalls, and proven success factors are shared to help teams deliver value through simulation, regardless of project size or complexity. docx, pdf VendorAdvanced Simio Features Session Chair: Jeffrey Smith (Simio LLC) Integrating Python with the Simio Portal API for Scheduling and Optimization Jeff Smith (Simio LLC) Abstract AbstractThis session demonstrates how Simio’s new Python API streamlines data handling, experimentation, scenario exploration, and enterprise integration. We begin with an overview of Simio Portal (web-based) and Simio Desktop (client software), emphasizing how their methods of use differ. Next, we show how Python scripting supports seamless use of local and remote data while connecting simulation models with enterprise systems. 
The presentation concludes with case studies illustrating successful applications in scheduling, scenario exploration, and optimization. docx, pdfUsing Simio Experiment Add-ins to Identify Optimal Scenarios Jeff Smith (Simio LLC) and Yongseok Jeon (North Carolina State University) Abstract AbstractThis session explores how Simio’s experiment add-ins support simulation-based optimization. We demonstrate how add-ins can be applied to identify optimal scenario configurations and discuss strategies for developing optimization-focused simulation models—a prerequisite for effective simulation-based optimization. The session also explains the requirements for developing custom add-ins, with a sample project provided to help developers create new add-ins that implement innovative optimization strategies. docx, pdf VendorBreakthrough Approaches for Design Simulations Session Chair: Amy Greer (MOSIMTEC, LLC) Optimizing Warehouse Performance: A Simulation-Driven Approach for Design Innovation Sumant Joshi, Gandhi Chidambaram, Howard Tseng, and Abhilash Deori (Amazon) Abstract AbstractIn an era of increasing warehouse complexity, this presentation focuses on simulation-driven methodology for optimizing warehouse design and operational efficiency. Through advanced computer simulation techniques, we demonstrate the development and validation of virtual prototypes that enable comprehensive evaluation of warehouse configurations and Material Handling Equipment (MHE) systems prior to implementation. The presentation introduces an integrated framework that synthesizes multiple warehouse design elements: spatial topologies, MHE specifications, control parameters, inventory management systems, and operational workflows. By enabling rapid iteration and early design validation, this approach significantly reduces development lead times, risks, and capital investment. 
Case studies illustrate quantifiable improvements in first-time launch quality and operational efficiency, while also revealing insights into future applications within emerging warehouse automation technologies. docx, pdf VendorSimulation and Scheduling Session Chair: Samantha Duchscherer (Applied Materials) Unified Simulation, Scheduling, and Planning Ecosystem Enhanced by AI Samantha Duchscherer and Madhu Mamillapalli (Applied Materials) Abstract AbstractApplied Materials continues to advance simulation, scheduling, and planning capabilities for smarter, faster factory operations. SmartFactory AutoSched Simulation delivers highly accurate operational forecasts and granular capacity control, enabling faster order updates, optimized equipment utilization, and streamlined WIP management. AutoSched has matured into a robust environment for reinforcement learning models that dynamically adjust dispatching and scheduling, allowing rapid responses to unexpected events and improving operational decision-making. To support broader planning needs, Applied now offers a centralized database for capacity-related master data, enabling faster and more flexible planning, further enhanced by high-level techniques to assess alignment between available capacity and forecasted demand. Applied is extending simulation and planning into a next-generation production control system with expanded out-of-the-box capabilities to help customers meet production targets more efficiently. Finally, a multilingual, LLM-powered assistant introduces natural language interaction across Applied’s solutions—validating data assumptions, generating graphs, running scenarios, and writing workflows—redefining how factories operate. 
docx, pdf VendorSimio Digital Twins Session Chair: Paul Glaser (Simio LLC) Enhanced Import/Export Capabilities for Digital Twin Development in Simio Paul Glaser (Simio LLC) Abstract AbstractThis presentation details Simio's expanded import/export functionality for streamlining digital twin development and maintenance. We demonstrate new capabilities for efficiently transforming CAD files, process data, and operational parameters into simulation-ready formats while preserving semantic relationships. The enhanced framework supports various data sources and formats through a unified interface that simplifies model updating for continuous alignment with physical systems. Case studies illustrate how these capabilities accelerate initial model development while supporting ongoing synchronization as production systems evolve. Implementation considerations including data validation, version control, and real-time synchronization approaches are discussed with practical guidelines for deployment in manufacturing environments. docx, pdfImmersive Visualization: Integrating Simio with Advanced Rendering Environments Paul Glaser (Simio LLC) Abstract AbstractThis session examines methodologies for integrating Simio simulation models with immersive visualization platforms to enhance stakeholder engagement and operational understanding. The presentation demonstrates techniques for exporting Simio model structures, behaviors, and simulation results to external rendering environments that support enhanced visual fidelity, interactive exploration, and multi-user collaboration. Implementation approaches for both real-time and post-simulation visualization are presented with performance benchmarks and resource requirements. 
Case studies from manufacturing and logistics applications illustrate how immersive visualization transforms technical models into intuitive visual experiences that improve communication with non-technical stakeholders and enable collaborative decision-making across organizational boundaries. docx, pdf
PlenaryIn Memoriam Session Chair: Robert G. Sargent (Syracuse University) In Memoriam: Thomas J. Schriber (1935–2024) Robert G. Sargent (Syracuse University) pdf PlenaryOpening Plenary: Model. Simulate. Innovate: Driving Amazon’s Fulfillment Design through Simulation Session Chair: Simon J. E. Taylor (Brunel University London) Model. Simulate. Innovate: Driving Amazon’s Fulfillment Design through Simulation Ganesh Nanaware (Amazon) Abstract AbstractThis talk explores how simulation serves as the digital backbone enabling innovation across Amazon’s fulfillment design and operations. We will discuss the mission of the simulation organization to virtually experiment, validate, and optimize complex fulfillment concepts before they are physically realized. As automation complexity grows, traditional design approaches face critical challenges in predicting system interactions, throughput impacts, and operational resilience. Simulation bridges this gap by offering a controlled yet high-fidelity environment to model, test, and refine designs at scale. The talk will walk through a range of real-world applications of simulation across the end-to-end fulfillment ecosystem spanning site-level process design, automation system validation, network and traffic flow planning, and real-time digital twin development. We will highlight how simulation-driven insights are informing key design decisions, driving process innovation, and accelerating strategic initiatives across diverse site types and automation technologies. Finally, we will reflect on the challenges and future vision for simulation to advance toward intelligent, adaptive models that mirror real-world operations in real time. Through this journey, we will see how simulation is not merely a design tool but a strategic enabler to innovate faster, smarter, and virtually, to build the fulfillment network of the future. 
pdf PlenaryTitans of Simulation: A Life in Simulation in the Service to Society, Past and Present – Does Simulation Have a Future? Session Chair: Simon J. E. Taylor (Brunel University London) A Life in Simulation in the Service to Society, Past and Present – Does Simulation Have a Future? Charles M. Macal (Argonne National Laboratory) Abstract AbstractJoin Chick Macal for a keynote journey through simulation’s past and present, reflecting on its societal impact and asking the provocative question: Does simulation have a future in the Age of AI?
Chick’s career has been a journey through the evolution of simulation, from the early days of digital computing to today’s advances in agent-based modeling, digital twins, and AI. Over more than five decades, he has witnessed simulation grow from its foundations in discrete-event models and system dynamics into a vital discipline for science, engineering, and decision-making. Chick’s focus has been applying simulation to problems of societal importance—public health, energy, and critical infrastructure.
In this Titans talk, he will reflect on milestones and influences that shaped both his career and the broader field, while also looking ahead. Will simulation remain central in an era dominated by AI? Can we inspire new generations to carry the field forward? Chick will argue that simulation’s future depends on how we innovate, integrate with emerging technologies, and continue to demonstrate its indispensable role in service to society. pdf PlenaryTitans of Simulation: How I Learned to Make a Living Building Models (And Why I Swear I’m Not Just Playing Video Games) Session Chair: Anastasia Anagnostou (Brunel University London) How I Learned to Make a Living Building Models (And Why I Swear I’m Not Just Playing Video Games) Matthew Hobson-Rohrer (Roar Simulation) Abstract AbstractMatt Hobson-Rohrer’s career is a masterclass in simulation technology. He’s held nearly every role possible: he built models for a decade, developed software, managed consulting teams, and eventually ran the entire AutoMod business. For 20 years with AutoMod, he climbed the ladder, later moving into business development to help Emulate3D and SIMUL8 grow their North American presence.
Today, Matt’s company, Roar Simulation, builds sophisticated digital models of material handling automation to help clients “eliminate risk” before physical systems go “live”.
After 37 years in the industry (which he insists is a legitimate job, not just playing Fortnite), Matt has some hard-won lessons to share with the WinterSim community. He will offer insights on the core principles we need to preserve as the field evolves, and his unfiltered perspective on what we should change. pdf
PosterPoster Track Lightning Presentations Session Chair: Bernd Wurth (University of Glasgow) How to (Re)inspire the Next Generation of Simulation Modelers Christoph Kogler (University of Natural Resources and Life Sciences, Vienna); Theresa Roeder (San Francisco State University); Antuela Tako (Nottingham Trent University); and Anastasia Anagnostou (Brunel University London) Abstract AbstractDespite the rising relevance of simulation in the twin transformation of sustainability and digitalization, simulation education often struggles to attract and retain learners. We introduce a generic, adaptable Constructive Alignment Simulation Framework that supports instructors in designing motivating, learner-centered, and coherently structured simulation courses. The framework emerged from the collective teaching experience of the authors in Austria, the UK, and the USA, covering a broad range of student profiles, institutional contexts, and educational levels. Building on key principles such as constructive alignment, revised Bloom’s Taxonomy, and blended learning, the framework includes structured learning objectives, modular teaching formats, and a portfolio of assessment methods. We show how individual components of the framework have already been implemented across different simulation courses and demonstrate its flexibility and modular applicability for gradual and context-sensitive adoption. pdfLeveraging User Embeddings for Improved Information Diffusion via Agent-Based Modeling Xi Zhang, Chathika Gunaratne, Robert Patton, and Thomas Potok (Oak Ridge National Laboratory) Abstract AbstractUnderstanding the cascade of behaviors influencing the actions of individuals in social networks is crucial to a multitude of application areas such as countering adversarial information operations. This paper presents a novel approach that leverages cosine similarity between user embeddings as the core mechanism of agent-based information diffusion modeling. 
We utilize SAGESim, a pure-Python agent-based modeling framework designed for distributed multi-GPU HPC systems, to simulate large-scale complex systems. Our methodology employs the Qwen3 embedding model to generate high-dimensional vector representations of social media users, capturing their behaviors and preferences. A cosine similarity-based influence mechanism, where agents with higher embedding similarity exhibit increased likelihood of information transmission, is evaluated. The framework enables scalable simulation of information diffusion by modeling individual agent interactions based on their semantic similarity rather than traditional network topology alone. Our approach demonstrates improved prediction accuracy by incorporating deep user representations into the agent-based modeling paradigm. pdfFrom Concept to Community: Using System Dynamics to Strengthen Adult Acute Mental Health Crisis Systems Raymond L. Smith (East Carolina University), Jeremy Fine (University of North Carolina), Elizabeth Holdsworth La (GSK), and Kristen Hassmiller Lich (University of North Carolina) Abstract AbstractU.S. psychiatric crisis systems often suffer from limited access to care, delayed care, and overuse of law
enforcement. Yet, decision-makers lack tools to evaluate system-wide effects of policy or resource changes. This poster presents a system dynamics model adapted for a regional mental health system based on the Anytown-MH framework developed by an American Psychiatric Association (APA) Task Force. Through stakeholder input and systems mapping, the model simulates population-level service flows across crisis teams, emergency departments, and inpatient facilities. Users interact with a visual dashboard to test strategic scenarios and observe downstream effects on arrests, ED delays, and system congestion. Results highlight trade-offs between reallocating beds and expanding mobile crisis services. This work illustrates how simulation modeling can guide evidence-informed investments in regional mental health systems. pdfHybrid Control Method for Autonomous Mobile Robots in Manufacturing Logistics Systems: Integrating Centralized Control and Decentralized Control Soyeong Bang (KAIST) and Young Jae Jang (KAIST, Daim Research) Abstract AbstractIn this paper, we investigate the optimization of operational policies for Autonomous Mobile Robots (AMR) in Automated Material Handling Systems (AMHS). We propose a hybrid control method combining centralized control with decentralized control for dynamic response to real-time operational events. The approach enables individual AMRs to make decentralized routing decisions during congestion by utilizing local traffic information. Three key factors influencing decentralized routing decisions are identified through rule-based policies. Considering those factors, we optimize which AMRs should operate under decentralized control based on spatial and task conditions. The efficiency of the proposed method was verified using Siemens Tecnomatix Plant Simulation (version 2404) simulation software. 
pdfMulti-Objective Probabilistic Branch-And-Bound for Bike-Sharing Storage Problem Hao Huang (Yuan Ze University), Shing Chih Tsai (National Cheng Kung University), and Chuljin Park (Hanyang University) Abstract AbstractThis study addresses bike-sharing system (BSS) storage level management using a multi-objective simulation optimization (MOSO) approach. We consider two objectives: minimizing the number of customers unable to rent bikes and the number unable to return bikes due to full stations. A discrete-event simulation of Taipei’s top ten stations evaluates system performance over one day. We develop a discrete version of the multi-objective probabilistic branch-and-bound (MOPBnB) algorithm to approximate the Pareto optimal set for this discrete MOSO problem. Convergence analysis and numerical results demonstrate the algorithm’s effectiveness in identifying trade-offs, providing practical insights for BSS storage management. pdfSimulating Donor Heart Allocation Using Predictive Analytics Lucas Zhu and Jie Xu (George Mason University) Abstract AbstractThe Organ Procurement and Transplant Network (OPTN) transplants only about one-third of potential donor hearts annually, largely due to concerns over organ quality or matching delays. To improve utilization and outcomes, we developed a discrete-event simulation model that integrates a logistic regression model to predict one-year post-transplant mortality risk and a Cox proportional hazards model to estimate waitlist mortality risk. The simulation model implements the OPTN allocation policies from January 2006 to October 2018, and allocates a donor organ to the highest-priority candidate whose risk from waiting exceeds the risk of post-transplant one-year mortality. Results showed a 33% increase in hearts transplanted per donor and a 90% reduction in average waiting time. These findings suggest that predictive modeling can significantly improve allocation decisions and could be incorporated into current organ evaluation workflows. 
pdfInverse optimization in finite-state semi-Markov decision processes Nhi Nguyen and Archis Ghate (University of Minnesota) Abstract AbstractInverse optimization involves finding values of problem parameters that would render given values of decision variables optimal. For finite-state, finite-action Markov decision processes (MDPs), inverse optimization literature has focused on imputing two types of inputs: reward parameters and transition probabilities. All published work focuses on discrete-time MDPs. We study inverse optimization in semi-Markov decision processes (SMDPs) with continuous-time MDPs (CTMDP) as a special case. Specifically, we wish to find reward parameters that are as close as possible to given estimates and that render a given policy optimal. We utilize Bellman’s equations for the forward SMDP to present a formulation of this inverse problem. This formulation is often convex. Simulation results on a batch manufacturing problem are included. pdfIdentifying Key Attributes for the Imaginability of Persona Behavior in Citizen Persona Role-Playing Gaming Simulations Nagomi Sakai, Akinobu Sakata, and Shingo Takahashi (Waseda University) Abstract AbstractThe Societal Prototyping Design (SPD) project uses Agent-Based Social Simulation (ABSS) to support human-centered municipal policy design (Kaihara, 2022). To build stakeholder trust in the ABSS model, a gaming simulation was developed where participants role-play the daily lives of virtual citizens. This study proposes a co-creative method for generating personas that enhances the imaginability of virtual citizens. Using MaxDiff analysis, we identify key attributes that support players in envisioning citizen behavior. 
pdfAn Integrated Optimization-simulation Framework for Zone-based Hurricane Evacuation Planning Yifan Wu and David Eckman (Texas A&M University) and Xiaofeng Nie (Fayetteville State University) Abstract AbstractWe examine the partitioning of hurricane-affected regions into zones and the optimization of evacuation planning decisions for each zone under various hurricane scenarios, aiming to expedite evacuations and mitigate traffic congestion. We propose a framework that integrates a two-stage stochastic mixed-integer programming (SMIP) model and a simulation-based optimization model using a microscopic traffic simulator. The SMIP model provides a problem-specific initialization for the simulation-based optimization. Our experiments show that this hybrid framework balances computational tractability and model fidelity. pdfLearning-based Scheduling for Stochastic Job Shop Scheduling with Mobile Robots Woo-Jin Shin, Dongyoon Oh, and Inguk Choi (Korea Advanced Institute of Science and Technology (KAIST)); Magnus Wiktorsson, Erik Flores-García, and Yongkuk Jeong (KTH Royal Institute of Technology); and Hyun-Jung Kim (Korea Advanced Institute of Science and Technology (KAIST)) Abstract AbstractThis study investigates the stochastic job shop scheduling problem with finite transportation resources, focusing on the integrated scheduling of machines and mobile robots that transfer jobs between machines. The system involves uncertainty, as both processing and transfer times are stochastic. The objective is to minimize the makespan. To address challenges of scalability and stochasticity, we propose a deep reinforcement learning (DRL) approach. In the proposed framework, each action is decomposed into two sequential sub-actions—selecting a job and assigning a mobile robot—and the agent is trained to sufficiently explore and adapt to the stochastic environment. 
Experimental results show that the proposed DRL method significantly outperforms various combinations of dispatching rules. pdfA New Modular Voxel-Based Methodology for Radiation Transport Simulations Using CAD Models with NEREIDA Osiris de la Caridad Núñez (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas); Mauricio Suárez-Durán (Universidad de la Costa); Hernán Asorey (piensas.xyz, Las Rozas Innova); Iván Sidelnik (Comisión Nacional de Energía Atómica, Centro Atómico Bariloche); Roberto Méndez (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas); Manuel Carretero (Gregorio Millán Institute for Fluid Dynamics, Nanoscience and Industrial Mathematics; Universidad Carlos III de Madrid); and Rafael Mayo-García (Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT)) Abstract AbstractThis work presents a new modular and automated methodology for simulating radiation transport using the Geant4 toolkit, which is a substantial part of the NEREIDA application workflow. The proposed approach starts from a geometry constructed in a CAD model, processed using FreeCAD and Python scripts that result in a GDML file (exporting to Geant4). Two complementary voxelization strategies can be applied. The first allows adaptive material allocation through nested parameterization in complex geometries. The second introduces a user-defined scoring mesh for the analysis of physical quantities in areas of interest within a facility, regardless of geometries or material boundaries. Both voxelization modes support multithreading and facilitate flexible and reproducible configuration workflows. The methodology has been integrated within the NEREIDA framework and validated by a simulation model of the Neutron Pattern Laboratory (LPN-CIEMAT). This methodology includes the possibility of introducing more realistic models of complex structures in future case studies. 
pdfExploring the Integration of Large Language Models and the Model Context Protocol for Automated Simulation Modeling: Feasibility Checks with a Matrix Production System Yongkuk Jeong (KTH Royal Institute of Technology) and Kiyoung Cho and Jong Hoon Woo (Seoul National University) Abstract AbstractThis paper proposes a conceptual framework for automating simulation model generation by integrating Large Language Models (LLMs) with the Model Context Protocol (MCP). The framework standardizes the interaction between generative agents and structured system data by using MCP as a context delivery mechanism for LLMs. This enables the automatic construction of discrete event simulation (DES) models through context-grounded reasoning and standardized data access. To demonstrate its potential, a conceptual sequence diagram illustrates how MCP could exchange model requirements, system data, and generated artifacts between an LLM host, MCP servers, and simulation environments. Proof-of-concept tests using simple DES models, such as an M/M/1 queuing system implemented in SimPy, evaluate the feasibility of generating executable simulation code from MCP-delivered context. Finally, a case study on the Matrix Production System (MPS) highlights the types of structured information that could be formalized via MCP, supporting scalable, context-grounded, LLM-driven simulation modeling workflows. pdfTowards Sustainable Electronics: Optimizing Automated Smartphone Disassembly Performance via Simulation Metamodeling Ahmad Attar, Natalia Hartono, Martino Luis, and Voicu Ion Sucala (University of Exeter) Abstract AbstractAddressing the urgent challenge of electronic waste, this research investigates an optimization framework for automated smartphone recycling lines. We combine discrete event simulation with response surface metamodeling to analyze and improve the throughput of high-speed disassembly. 
This simulation model portrays the real disassembly procedure, from pre-processing (sorting and adhesive disabling) through AI-assisted X-ray structural analysis, precision weakness creation via press-cutting, to final component separation by impact. Experimental design explores six operational parameters, revealing that sorting and freezing speeds—particularly their synergistic interaction—exert dominant influence on the system output. The derived quadratic metamodel enables rapid, resource-efficient performance prediction without repeated simulation, which eventually leads to the identification of optimal configurations for maximum efficiency. pdfMain Object-Based Simplification of Object-Centric Process Models for Simulation Seunguk Kang, Gyeunggeun Doh, and Minseok Song (POSTECH) Abstract AbstractObject-Centric Process Mining (OCPM) provides a more expressive representation of business processes by capturing interactions among object types. However, the resulting models often exhibit high structural complexity, which limits their applicability in simulation-based analyses. This study proposes a method for model simplification by identifying a main object within Object-Centric Event Logs (OCEL) and constructing a simplified Object-Centric Petri Net (OCPN) centered around it. Contextual information from related objects is embedded as attributes of the main object, thereby reducing structural complexity while preserving the semantic context of the original process. The proposed approach enhances model interpretability and improves the practical feasibility of simulation-driven process analysis. 
pdfSimulating Smart City Traffic with Edge Learning and Generative AI Abdolreza Abhari (Toronto Metropolitan University), Mani Sharifi (Miami University), and Sharareh Taghipour and Sina Pahlavan (Toronto Metropolitan University) Abstract AbstractNovel models and methodologies are introduced to enable a simplified simulation framework for optimizing resource usage in edge-based Deep Learning (DL) traffic control systems of smart cities. An underexplored G/G/s queuing model is employed in simulation to measure stability, alongside the Gamma distribution for modeling traffic patterns. Beyond simulation, this G/G/s queuing model can be embedded in a scheduler for real systems to predict network delays and node utility for adaptive job assignments. We develop a unique agent-based model and a simulation environment for edge training that captures network dynamics at the granularity of DL computational units. In addition to conventional ML experiments, this simulation enables the system-level evaluation of load-balancing scenarios for federated edge learning traffic monitoring systems in localized urban zones. In a comprehensive, data-driven simulation of scalability and resource management, the agent nodes undergo real training using synthetic video frames generated by the Wasserstein Generative Adversarial Network (WGAN). pdfInteger-Based Traffic Assignment for High-Performance City-Scale Vehicle Simulations Zhe Rui Gabriel Yong (A*STAR Institute of High Performance Computing); Jordan Ivanchev (TUMCREATE, intobyte); and Vasundhara Jayaraman and Muhamad Azfar Ramli (A*STAR Institute of High Performance Computing) Abstract AbstractDeveloping realistic traffic simulations requires accurately modelling the routing and movement of personal and freight vehicles under network congestion. Since routing decisions impact traffic dynamics, capturing congestion effects is essential for credible simulation outcomes. 
Traffic assignment methods address this by modelling how vehicles choose routes in congested networks. While All-or-Nothing (AoN) assignment is simple and widely used, it ignores congestion and produces unrealistic traffic patterns. The Frank-Wolfe (FW) algorithm improves realism by solving the Traffic Assignment Problem under user equilibrium, but it outputs continuous flows, which are unsuitable for agent-based simulations requiring explicit vehicle routes. We evaluate Incremental Loading (IL), a heuristic that generates discrete, congestion-aware routes in a single pass. AoN, FW, and IL are compared using over 130,000 vehicle trips on a city-scale road network with 354,067 nodes and 502,431 edges. IL reduces the Beckmann objective by 36% over AoN and requires only 33% of FW’s runtime. pdfProductivity and Bottleneck Analysis through Fast Iterative Discrete-Event Simulation in Transformer Manufacturing Takumi Kato (Hitachi America, Ltd.; Hitachi, Ltd.); Zhi Hu (Hitachi America, Ltd.); and Sairam Thiagarajan (Hitachi Energy) Abstract AbstractThe manufacturing of medium and large power transformers is characterized by a high-mix, low-volume production model, with extensive customization across units. This complexity introduces significant challenges in shop-floor operations, particularly in personnel allocation and planning for material flow, WIP movement, and order prioritization. While conventional tools like spreadsheets and simple equations can support basic productivity and risk analysis, quantitatively assessing bottlenecks and evaluating countermeasures is still a significant challenge. Given the constraints on time and resources, accurately estimating the impact of interventions is crucial. However, building detailed factory models and running simulations to explore hypotheses about bottlenecks and their mitigation is often time-consuming, requiring extensive data collection and setup. 
To address this, we developed a lightweight industrial simulator based on the Rapid Modeling Architecture (RMA) that enables rapid exploration of current bottlenecks and potential countermeasures. This tool supported hypothesis testing and provided valuable insights to guide improvement efforts. pdfContrasting Trajectories: A Study of Out-Of-Hospital Cardiac Arrest Incidence and Community Response Ashish Kumar (Singapore Health Services, Duke-NUS Medical School); Fahad Javaid Siddiqui and Marcus Eng Hock Ong (Duke-NUS Medical School); and Sean Shao Wei Lam (Singapore Health Services, Duke-NUS Medical School) Abstract AbstractWith a rapidly ageing population, Singapore faces the challenge of managing rising Out-of-Hospital Cardiac Arrest (OHCA) incidence. Time is of the essence and Community First Responders (CFRs) are necessary in OHCA response. However, CFRs are also members of an ageing society; ageing also affects potential adoption of the CFR role. No public forecast is available either for OHCA incidence or CFR availability in Singapore. We estimated the age-specific incidence of OHCA in Singapore based on historical data from 2010 to 2021. We developed an Agent Based Model (ABM) to simulate the population of Singapore (stratified by age), the incidence of OHCA, and adoption of the CFR role until 2050. Preliminary results from the ABM show that even if CFR adoption follows a successful diffusion curve, there will not be enough CFRs relative to the number of OHCA cases. Our open-source ABM can be an aid for policymakers and researchers. 
pdfA Multi-Fidelity Approach To Integer-Ordered Simulation Optimization Using Gaussian Markov Random Fields Graham Burgess, Luke Rhodes-Leader, and Rob Shone (Lancaster University) and Dashi Singham (Naval Postgraduate School) Abstract AbstractWe are interested in discrete simulation optimization problems with a large solution space where decision variables are integer-ordered and models of both low and high fidelity are available to evaluate the objective function. We model the error of a low-fidelity deterministic model with respect to a high-fidelity stochastic simulation model using a Gaussian Markov random field. In a Bayesian optimization framework, using the low-fidelity model and the model of its error, we reduce the reliance on the expensive high-fidelity model. pdfA Digital Twin Framework for Integrated Airline Recovery Nirav Lad (Air Force Institute of Technology) Abstract AbstractThe airline industry provides a critical and connective transportation mode with global economic impact. These movements and connectivity are impaired by disruptions from aircraft maintenance, system delay, weather, and other sources. The integrated airline recovery problem mitigates significant disruptions to an airline’s planned schedule. Solution methods require airline operations and passenger itinerary data. Unfortunately, publicly available data is scarce or incomplete. Moreover, these solutions can be integrated into an airline digital-twin framework utilized by an Airline Operations Control Center (AOCC). This study presents an airline data generation methodology and a digital-twin (DT) framework to enable airline recovery. 
pdfA GenAI-enhanced Framework for Agent-based Supply Chain Simulation: Integrating Unstructured Data with Strategic Decision Modelling Wei Nie, Fangrui Li, Naoum Tsolakis, and Mukesh Kumar (University of Cambridge) Abstract AbstractThis paper introduces a methodological framework integrating Generative Artificial Intelligence (GenAI) with agent-based simulation modelling for managing supply chain (SC) resilience. Traditional simulation approaches rely on structured datasets and verbal descriptions, limiting their ability to capture real-world SC dynamics driven by unstructured information streams. Our proposed framework addresses this limitation by developing a Retrieval-Augmented Generation (RAG) pipeline powered by Large Language Models (LLMs) that systematically extracts structured insights from unstructured sources, including policy documents, news archives, and market reports. The GenAI-enhanced framework improves simulation realism and reveals context-dependent strategic reserve effectiveness. We demonstrate the framework's effectiveness through India's lithium carbonate strategic reserve planning, modelling a multi-echelon SC spanning mining, processing, trading, port, manufacturing, and reserve management operations. pdf Poster, PhD ColloquiumPhD Colloquium Posters Session Chair: Eunhye Song (Georgia Institute of Technology), Alison Harper (University of Exeter, The Business School), Cristina Ruiz-Martín (Carleton University), Sara Shashaani (North Carolina State University) PosterPoster Track Posters Session Chair: Bernd Wurth (University of Glasgow)
Track Coordinator - Ph.D. Colloquium: Alison Harper (University of Exeter, The Business School), Cristina Ruiz-Martín (Carleton University), Sara Shashaani (North Carolina State University), Eunhye Song (Georgia Institute of Technology) PhD ColloquiumPhD Colloquium Session A1 Session Chair: Eunhye Song (Georgia Institute of Technology), Alison Harper (University of Exeter, The Business School) Evaluating required validity and granularity of digital twins for operational planning in roll-on roll-off terminals Teresa Marquardt (Christian-Albrechts-Universität zu Kiel) Program Track: PhD Colloquium Abstract AbstractDigital twins (DTs) are increasingly used in large container ports to support decision-making during operations. However, building and maintaining a full-scale DT involves significant effort and costs, which smaller roll-on roll-off terminals often cannot justify. This study asks how much granularity and accuracy a DT actually needs while providing meaningful decisions to the port. The focus lies on two planning tasks: minimizing vessel turnaround time through online scheduling and predicting departure times to support berth and shore power planning. The DT here mirrors a pseudo-analog twin and applies discrete event-based simulations based on data from the Port of Kiel (Germany). Several experiments test different levels of model validity and granularity. The DT’s performance is measured through accuracy in predicting departure time and the efficiency of engine preheating. pdfA Digital Twin Framework for Integrated Airline Recovery Nirav Lad (Air Force Institute of Technology) Program Track: PhD Colloquium Abstract AbstractThe airline industry provides a critical and connective transportation mode with global economic impact. These movements and connectivity are impaired by disruptions from aircraft maintenance, system delay, weather, and other sources. 
The integrated airline recovery problem mitigates significant disruptions to an airline’s planned schedule. Solution methods require airline operations and passenger itinerary data. Unfortunately, publicly available data is scarce or incomplete. Moreover, these solutions can be integrated into an airline digital-twin framework utilized by an Airline Operations Control Center (AOCC). This study presents an airline data generation methodology and a digital-twin (DT) framework to enable airline recovery. pdfCold-Start Forecasting of New Product Life-Cycles via Conditional Diffusion Models Ruihan Zhou (Guanghua School of Management, Peking University) Program Track: PhD Colloquium Abstract AbstractAccurately forecasting the demand trajectories of newly launched products is a fundamental challenge due to sparse or nonexistent historical data. This paper introduces the Conditional Diffusion Life-cycle Forecaster (CDLF), a generative modeling framework designed for cold-start scenarios. CDLF integrates product attributes, reference trajectories of comparable products, and early post-launch sales signals into a unified conditional representation that guides a denoising diffusion process. By doing so, the framework produces realistic and uncertainty-aware forecasts even when sales data are absent or extremely limited. Empirical studies on the Intel microprocessor dataset show that CDLF outperforms classical diffusion models, Bayesian updating approaches, and state-of-the-art machine learning baselines. The results demonstrate that CDLF provides more accurate forecasts and calibrated uncertainty, highlighting its potential to support inventory planning and decision-making for new product launches. 
pdfPrior-data Fitted Networks for Mixed-variable Bayesian Optimization Timothy Shinners (Universität der Bundeswehr München) Program Track: PhD Colloquium Abstract AbstractBayesian optimization is an algorithm used for black-box optimization problems, such as parameter tuning for simulations. Prior-data fitted networks (PFNs) are transformers that are trained to behave similarly to Gaussian processes (GPs). They have been shown to perform well as surrogate functions in Bayesian optimization methods, offering performance capabilities similar to GPs with reduced computational expense. PFNs have not yet been applied to settings with a mixed-variable input space that involves both numerical and categorical variables. In this work, we train three new PFNs using existing mixed-variable GPs. We integrate them into mixed-variable Bayesian optimization (MVBO) methods and conduct experiments with six different black-box functions to assess their behavior. Our PFNs yield comparable quality of performance to that of their GP-based counterparts in MVBO settings, while operating at drastically reduced computational expense. pdfCoupling Multiple Scales of Agent-based Models: Superspreading and Effects of Transmission Chain Information Loss Sascha Korf (Forschungszentrum Juelich, German Aerospace Center (DLR)) Program Track: PhD Colloquium Abstract AbstractDuring emerging epidemics, surveillance systems typically detect aggregate case counts but lack detailed
transmission chain information in a timely manner, forcing epidemic modelers to make assumptions when initializing
their models. This work quantifies how missing transmission chain information affects agent-based model
predictions. Our approach couples two agent-based models: Vadere (Rahn et al. 2024) generates detailed
superspreading scenarios, providing (synthetic) ground truth data for initialization comparisons, while the
MEmilio-ABM (Kerkmann et al. 2025; Bicker et al. 2025) conducts epidemic simulations on a larger scale
of districts. We investigate two scenarios representing restaurant and workplace outbreaks. We see that
initialization through a detailed micro-simulation creates substantial differences in epidemic trajectories over
10-day simulations in comparison to traditional uniformly distributed initialization. Results demonstrate that
uniform initialization approaches systematically bias epidemic predictions, with differences up to 46.0%
in cumulative infections. This highlights the importance of transmission chain reconstruction in outbreak investigations. pdfRegular Tree Search for Simulation Optimization Du-Yi Wang (City University of Hong Kong, Renmin University of China) Program Track: PhD Colloquium Abstract AbstractTackling simulation optimization problems with non-convex objective functions remains a fundamental challenge in operations research. In this paper, we propose a class of random search algorithms, called Regular Tree Search, which integrates adaptive sampling with recursive partitioning of the search space. The algorithm concentrates simulations on increasingly promising regions by iteratively refining a tree structure. A tree search strategy guides sampling decisions, while partitioning is triggered when the number of samples in a leaf node exceeds a threshold that depends on its depth. Furthermore, a specific tree search strategy, Upper Confidence Bounds applied to Trees, is employed in the Regular Tree Search. We prove global convergence under sub-Gaussian noise, based on assumptions involving the optimality gap, without requiring continuity of the objective function. Numerical experiments confirm that the algorithm reliably identifies the global optimum and provides accurate estimates of its objective value. pdfQuantifying the Impact of Proactive Community Case Management on Severe Malaria Cases Using Agent-based Simulation Xingjian Wang (Georgia Institute of Technology) Program Track: PhD Colloquium Abstract AbstractMalaria remains a major global health threat, especially for children under five, causing hundreds of thousands of deaths annually. Proactive Community Case Management (ProCCM) is an intervention designed to enhance early malaria detection and treatment through routine household visits (sweeps), complementing existing control measures. 
ProCCM is crucial in areas with limited healthcare access and low treatment-seeking rates, but its effectiveness depends on transmission intensity and the coverage of existing interventions. To quantify the impact of ProCCM, we calibrated an agent-based simulation model for settings with seasonal transmission and existing interventions. We evaluated how different ProCCM scheduling strategies perform under varying treatment-seeking rates in reducing severe malaria cases. Our proposed heuristics—greedy and weighted—consistently outperformed a standardized, uniformly spaced approach, offering practical guidance for designing more effective and adaptive malaria control strategies. pdfEnhanced Derivative-Free Optimization Using Adaptive Correlation-Induced Finite Difference Estimators Guo Liang (Renmin University of China) Program Track: PhD Colloquium Abstract AbstractGradient-based methods are well-suited for derivative-free optimization (DFO), where finite-difference (FD) estimates are commonly used as gradient surrogates. Traditional stochastic approximation methods, such as Kiefer-Wolfowitz (KW) and simultaneous perturbation stochastic approximation (SPSA), typically utilize only two samples per iteration, resulting in imprecise gradient estimates and necessitating diminishing step sizes for convergence. In this paper, we combine a batch-based FD estimate and an adaptive sampling strategy, developing an algorithm designed to enhance DFO in terms of both gradient estimation efficiency and sample efficiency. Furthermore, we establish the consistency of our proposed algorithm and demonstrate that, despite using a batch of samples per iteration, it achieves the same sample complexity as the KW and SPSA methods. Additionally, we propose a novel stochastic line search technique to adaptively tune the step size in practice. Finally, comprehensive numerical experiments confirm the superior empirical performance of the proposed algorithm. 
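As a generic illustration of the batch-based finite-difference idea in the derivative-free optimization abstract above (a minimal sketch with hypothetical function names, not the authors' algorithm), the following code averages a batch of central finite-difference pairs on a noisy black-box objective and uses the result as a gradient surrogate in a plain descent loop:

```python
import random

def noisy_objective(x):
    # Hypothetical noisy black-box: (x - 2)^2 plus zero-mean Gaussian noise.
    return (x - 2.0) ** 2 + random.gauss(0.0, 0.1)

def central_fd_gradient(f, x, h=0.1, batch=32):
    # Batch-averaged central finite difference: averaging over `batch`
    # pairs of noisy evaluations reduces the variance of the estimate.
    diffs = [(f(x + h) - f(x - h)) / (2 * h) for _ in range(batch)]
    return sum(diffs) / len(diffs)

def fd_descent(f, x0, steps=200, step_size=0.1):
    # Plain gradient descent driven by the FD surrogate gradient.
    x = x0
    for _ in range(steps):
        x -= step_size * central_fd_gradient(f, x)
    return x

random.seed(0)
x_star = fd_descent(noisy_objective, x0=10.0)  # should end near the minimizer x = 2
```

Classical KW/SPSA schemes use only two evaluations per iteration and must shrink the step size; the batch average here trades extra samples per iteration for a steadier gradient estimate, which is the trade-off the abstract analyzes.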
pdfBayesian Procedures for Selecting Subsets of Acceptable Alternatives Jinbo Zhao (Texas A&M University) Program Track: PhD Colloquium Abstract AbstractWe develop a Bayesian framework for subset selection in ranking-and-selection problems that goes beyond the classical focus on identifying a single best alternative. The framework accommodates broad notions of acceptability, including delta-optimality, stochastically constrained optimality, and Pareto optimality. The central challenge is that evaluating posterior quantities involves integrating over high-dimensional and potentially non-convex acceptance regions and must be repeatedly computed for many candidate subsets. We circumvent this challenge by employing sample-average approximation to reformulate optimization problems that involve these integrals into mixed-integer programs. We also propose new myopic sequential sample allocation policies. pdf PhD ColloquiumPhD Colloquium Session B1 Session Chair: Sara Shashaani (North Carolina State University), Cristina Ruiz-Martín (Carleton University) Simulation-driven Reliability-aware Operation of Active Distribution Systems Gejia Zhang (Rutgers University) Program Track: PhD Colloquium Abstract AbstractWe embed decision- and context-dependent reliability directly into short-term operational decisions for active distribution systems, using simulation for empirical evaluation. Component failure probabilities are learned from operating conditions and ambient temperature using logistic models estimated with Bayesian sampling; rare-event scarcity is handled through weighted bootstrapping. These reliability models are then coupled with an AC power flow representation and solved by a sequential convex approach that iteratively linearizes the expected cost of energy not served. 
Relative to a cost-only dispatch, the reliability-aware controller shifts battery and demand schedules, reduces currents on critical lines, and cuts expected unserved energy cost by more than 20% with a modest increase in operating cost. pdfTwo-Stage Stochastic Multi-Objective Linear Programs: Properties and Algorithms Akshita Gupta (Purdue University) Program Track: PhD Colloquium Abstract AbstractConsider a two-stage stochastic multi-objective linear program (TSSMOLP), which is a natural generalization of the well-studied two-stage stochastic linear program, allowing modelers to specify multiple objectives in each stage. The second-stage recourse decision is governed by an uncertain multi-objective linear program (MOLP) whose solution maps to an uncertain second-stage nondominated set. The TSSMOLP then comprises the objective, which is the Minkowski sum of a linear term plus the expected value of the second-stage nondominated set, and the constraints, which are linear. We propose a novel formulation for TSSMOLPs on a general probability space. Further, to aid theoretical analysis, we propose reformulations to the original problem and study their properties to derive the main result that the global Pareto set is cone-convex. Finally, we leverage cone-convexity to demonstrate that solving a TSSMOLP under a sample average approximation framework is equivalent to solving a large-scale MOLP. pdfModular Full-head fNIRS Simulator for Hemodynamic Response Modeling Condell Eastmond (Rensselaer Polytechnic Institute) Program Track: PhD Colloquium Abstract AbstractFunctional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging modality that has shown promising results as a tool for clinical neuroimaging and brain-computer interfaces (BCI).
Due to the shortcomings of heuristic denoising and expert-dependent postprocessing, data-driven approaches (e.g., deep learning) to fNIRS analysis are becoming increasingly popular. To accommodate the associated demand for large, labeled fNIRS datasets, we develop a modular 3D fNIRS simulator. Our simulator can generate spatiotemporal distributions of a variety of hemodynamic response functions (HRFs), allowing for high-fidelity modeling of experiment-specific cortical activity. We describe the model used for generating synthetic data, and compare several HRF basis functions used to model cortical hemodynamic activity. pdfEvaluating Liver Graft Acceptance Policies Using a Continuous-Time Markov Chain Simulation Framework Jiahui Luo (Dartmouth College) Program Track: PhD Colloquium Abstract AbstractLiver transplantation is the only curative treatment for patients with end-stage liver disease. The United Network for Organ Sharing operates the national liver transplant waiting list and allocates organs under a complex priority system based on medical urgency, geography, and waiting time. However, the limited availability of high-quality organs and variability in acceptance decisions continue to challenge the system. I develop a continuous-time Markov reward process simulation framework to evaluate liver offer acceptance practices in the United States. This simulation framework models organ arrivals and patients’ health progression as continuous-time processes and mimics how decisions are made in practice using a randomized policy. Results highlight the trade-offs between waiting for higher-quality organs and accepting earlier offers of lower quality. This framework provides insights and identifies areas for enhancing patient management and liver offer acceptance. 
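The continuous-time Markov chain machinery underlying a framework like the liver-transplant simulator above can be sketched generically. This toy example (hypothetical 3-state rate matrix, not the paper's model) uses the standard recipe: sample an exponential holding time with the state's total exit rate, then jump to a neighbor with probability proportional to its rate:

```python
import random

# Hypothetical 3-state health-progression chain: rates[i][j] is the
# transition rate from state i to state j (diagonal entries are zero).
rates = [
    [0.0, 0.5, 0.1],
    [0.2, 0.0, 0.4],
    [0.0, 0.0, 0.0],  # state 2 is absorbing (e.g., transplant or death)
]

def simulate_ctmc(rates, state=0, horizon=100.0, rng=random):
    # Standard CTMC simulation: exponential holding time with the total
    # exit rate, then jump proportionally to the individual rates.
    # (The last recorded event may fall slightly past the horizon.)
    t, path = 0.0, [(0.0, state)]
    while t < horizon:
        total = sum(rates[state])
        if total == 0.0:  # absorbing state reached
            break
        t += rng.expovariate(total)
        u, cum = rng.random() * total, 0.0
        for j, r in enumerate(rates[state]):
            cum += r
            if u < cum:
                state = j
                break
        path.append((t, state))
    return path

random.seed(1)
trajectory = simulate_ctmc(rates)
```

With these rates the chain is absorbed in state 2 almost surely well before the horizon, so a trajectory is a short list of (time, state) events starting at (0.0, 0).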
pdfEfficient Distance Pruning for Process Suffix Comparison in Prescriptive Process Monitoring Sarra Madad (Troyes University of Technology, QAD) Program Track: PhD Colloquium Abstract AbstractPrescriptive process monitoring seeks to recommend actions that improve process outcomes by analyzing possible continuations of ongoing cases. A key obstacle is the heavy computational cost of large-scale suffix comparisons, which grows rapidly with log size. We propose an efficient retrieval method exploiting the triangle inequality: distances to a set of optimized pivots define bounds that prune redundant comparisons. This substantially reduces runtime and is fully parallelizable. Crucially, pruning is exact: the retrieved suffixes are identical to those from exhaustive comparison, thereby preserving accuracy. These results show that metric-based pruning can accelerate suffix comparison and support scalable prescriptive systems. pdfUncertainty-aware Digital Twin of Business Processes via Bayesian Calibration and Posterior-predictive Simulation Guilhem Nespoulous (QAD Inc., Université de Technologie de Troyes) Program Track: PhD Colloquium Abstract AbstractEvent logs are finite, partial views of a latent stochastic process. We present a Bayesian digital twin based on probabilistic, random-effects, event-by-event generators that utilize historical logs and propagate uncertainty. After calibration with Hamiltonian Monte Carlo, each posterior draw is a parameter vector that defines a complete simulator: using it, we generate or continue event logs by sequentially sampling the next activity, the inter-event time, and new case arrivals conditional on history and crowding (congestion). Computing KPIs (cycle time, cost, directly-follows counts, etc.) on the simulated logs and aggregating over all posterior draws yields posterior-predictive KPI distributions. Validation compares these distributions to bootstrap baselines from the observed log using distributional distances. 
The result is a scenario-ready process twin that reports outcomes as distributions, enabling risk-aware decisions. pdfEvaluating MDP-derived Personalized Strategies against Guideline-based Care for the Management of Small Renal Masses Wendy Qi (University of Virginia) Program Track: PhD Colloquium Abstract AbstractManagement of small renal masses (SRMs) requires balancing cancer control with preservation of kidney function, while accounting for competing causes of mortality. We developed a patient-level microsimulation model to evaluate health outcomes under alternative treatment strategies, including published guidelines and MDP-derived personalized policies. Patients were simulated with heterogeneous attributes—age, sex, tumor size, subtype, and chronic kidney disease (CKD) stage—followed in 6-month cycles starting from age 65, evolving probabilistically across health states. The model incorporated tumor growth, CKD progression, recurrence, metastasis, and mortality from cancer, CKD-related cardiovascular disease, and other causes. Outcomes included life expectancy, quality-adjusted life years (QALYs), and cause-specific mortality, highlighting trade-offs between cancer control and renal function preservation. Results showed that MDP-derived policies improved survival and QALYs compared with guideline-based care. This work demonstrates the value of microsimulation in supporting treatment planning for SRM patients under competing risks. pdfConstructing Confidence Intervals for Value-at-Risk via Nested Simulation Qianwen Zhu (City University of Hong Kong) Program Track: PhD Colloquium Abstract AbstractNested simulation is a powerful tool for estimating widely-used risk measures, such as Value-at-Risk (VaR). While point estimation of VaR has been extensively studied in the literature, the topic of interval estimation remains comparatively underexplored. 
In this paper, we present a novel nested simulation procedure for constructing confidence intervals (CIs) for VaR with statistical guarantees. The proposed procedure begins by generating a set of outer scenarios, followed by a screening process that retains only a small subset of scenarios likely to result in significant portfolio losses. For each of these retained scenarios, inner samples are drawn, and the minimum and maximum means from these scenarios are used to construct the CI. Theoretical analysis confirms the asymptotic coverage probability of the resulting CI, ensuring its reliability. Numerical experiments validate the method, demonstrating its high effectiveness in practice. pdfA Method to Derive Human Integration Requirements for Complex Systems Through Stochastic Modeling Michael Westenhaver (George Washington University) Program Track: PhD Colloquium Abstract AbstractAs the complexity of flight deck automation has grown over the past several decades, so too has the potential for operator confusion and decision-making errors in complex failure scenarios, a problem that is only expected to increase dramatically with the development of new forms of Advanced Air Mobility (AAM). These errors often stem from design gaps in the Human Machine Interface (HMI) in the face of unexpected emergent properties of the human-machine system. This research seeks to enhance system designers’ ability to cut through this complexity by proposing a new stochastic modeling and simulation method that models HMI design elements and human task analysis over a range of scenarios. Through this method, potential for latent errors can be identified early in the design process. The viability of the method is demonstrated through a proof-of-concept Simulink model, though further work is needed to validate predictions against real world data. 
pdfAn Agent-based Social-geospatial Model to Evaluate the Effectiveness of a Targeted Heat Warning System Darya Abbasi (University of Texas at Arlington) Program Track: PhD Colloquium Abstract AbstractExtreme heat poses significant threats to vulnerable populations, and heat events now occur more frequently and are more severe and longer-lasting due to climate change. Research suggests that targeted building-specific heat warning systems could increase responsiveness to extreme heat events and save lives. To study this, an agent-based model integrating geospatial data with a dynamic social network was developed to simulate residents’ responses to targeted heat alerts. Agent decisions are influenced by the strength of their social ties, transportation access, and spatial constraints. Experimental results indicate the importance of leveraging strong social connections, but the effectiveness of a targeted heat warning system may be limited if the challenges posed by transportation barriers and inadequate cooling centers remain unaddressed. pdfSimulation-Based Multi-Agent Reinforcement Learning for Network Interdiction Games Xudong Wang (University of Tennessee, Knoxville) Program Track: PhD Colloquium Abstract AbstractNetwork interdiction problems capture adversarial interactions between a defender seeking to preserve flow in a network and an attacker aiming to disrupt it. Traditional approaches model this as a bilevel optimization problem, which quickly becomes intractable in large or dynamic networks. In this work, we investigate a simulation-based framework where both the defender and attacker are modeled as reinforcement learning (RL) agents. Using a fixed network topology, episodes of play simulate interdiction and defense actions, evaluate post-interdiction maximum flow, and provide rewards to each agent. The defender learns policies that maximize residual flow, while the attacker learns to minimize it, yielding a competitive zero-sum setting. 
The simulation demonstrates that both agents adaptively learn mixed strategies and converge toward a stable equilibrium distribution. pdf PhD ColloquiumPhD Colloquium Keynote: The Interplay Between Simulation Models, Statistical Models, and Data Systems Session Chair: Eunhye Song (Georgia Institute of Technology) The Interplay Between Simulation Models, Statistical Models, and Data Systems Peter Haas (University of Massachusetts) Program Track: PhD Colloquium Abstract AbstractTechniques for simulation modeling, statistical modeling (including machine learning), and data management have become increasingly intertwined, opening up rich possibilities for research and applications. We will survey a number of our research projects at the interface of these three areas. These include (i) use of data-integration techniques to create composite simulation models for complex systems-of-systems, (ii) use of machine learning to accelerate simulation-based optimization, and (iii) integrating simulations and information management systems to allow efficient simulation analysis close to the data. These projects also illustrate the many twists and turns of a professional career, and the importance of maintaining flexibility and curiosity. pdf PhD ColloquiumPhD Colloquium Session A2 Session Chair: Eunhye Song (Georgia Institute of Technology) Nested Denoising Diffusion Sampling for Global Optimization Yuhao Wang (Georgia Institute of Technology) Program Track: PhD Colloquium Abstract AbstractWe propose Nested Denoising Diffusion Sampling (NDDS), a novel method for global optimization of expensive black-box functions. NDDS leverages conditional denoising diffusion probabilistic models to approximate the evolving solution distribution, eliminating the need for extensive additional function evaluations. 
Unlike prior diffusion-based optimization methods that rely on heuristically chosen conditioning variables, NDDS systematically generates them using a statistically principled mechanism. Furthermore, we introduce a likelihood ratio–based data reweighting strategy to correct the mismatch between the empirical training distribution and the current target distribution. Numerical experiments on benchmark problems demonstrate that NDDS consistently outperforms the Extended Cross-Entropy method under the same evaluation budget, with notable efficiency gains in high-dimensional settings. pdfThe Human Gear: Enabling Reliable Smart Simulation through Expert Interaction Samira Khraiwesh (Technical University of Munich) Program Track: PhD Colloquium Abstract AbstractOrganizations face increasing pressure to optimize processes, control costs, and comply with changing regulations. Business Process Simulation (BPS) offers a way to test process changes in a controlled environment before implementation, providing insights that support strategic decision-making. Yet, despite its proven potential, BPS remains underutilized in practice. Even with recent advances in automated simulation based on event logs, real-world adoption is hindered by persistent challenges such as poor data quality, lack of contextual understanding, and insufficient trust in black-box models. This PhD project tackles these barriers by proposing a novel, Human-In-The-Loop (HITL) simulation framework that dynamically balances automation with expert involvement. By embedding human judgment at key stages, this research aims to deliver more accurate, explainable, and practically usable simulations. The outcome is a scalable approach that bridges the gap between simulation theory and industry reality. pdfStopping Rules for Sampling in Precision Medicine Mingrui Ding (City University of Hong Kong) Program Track: PhD Colloquium Abstract AbstractPrecision medicine (PM) aims to tailor treatments to patient profiles. 
In PM practice, treatment performance is typically evaluated through simulation models or clinical trials. Despite differences in sampling subjects and requirements, both rely on a sequential sampling process and require a stopping time to ensure, with prespecified confidence, the best treatment is correctly identified for each patient profile. We propose unified stopping rules for both settings by adapting the generalized likelihood ratio (GLR) test and then calibrating it using mixture martingales with a peeling method. The rules are theoretically grounded and can be integrated with different types of sampling strategies. Their effectiveness is demonstrated in a case study. pdfOptimizing Stochastic Systems Using Streaming Simulation Robert James Lambert (Lancaster University) Program Track: PhD Colloquium Abstract AbstractSequential optimization of stochastic systems is an increasingly important area of research. With modern systems generating continuous streams of data, decision-making policies must be able to adapt in real time to incorporate new information as it arrives. My PhD research develops theory and methods for online sequential decision-making with streaming observations. In particular, it addresses the key challenges of providing convergence guarantees for simulation optimization procedures, whilst maintaining model accuracy within computational limits. Initial work focused on optimizing M/G/c queueing systems with unknown arrival processes, deriving asymptotic convergence results for a Monte Carlo-based algorithm. Subsequent research has extended these ideas to finite-time performance evaluation, decision-dependent observations, and adaptive policies guided by approximate dynamic programming principles. These results provide a foundation for designing adaptive, data-driven policies in complex stochastic systems, enabling more reliable real-time decision-making. 
pdfGeneral-purpose Ranking and Selection for Stochastic Simulation with Streaming Input Data Jaime Gonzalez-Hodar (Georgia Institute of Technology) Program Track: PhD Colloquium Abstract AbstractWe study ranking and selection (R&S) where the simulator's input models are more precisely estimated from the streaming data obtained from the system. The goal is to decide when to stop updating the model and return the estimated optimum with a probability of good selection (PGS) guarantee. We extend the general-purpose R&S procedure by Lee and Nelson by integrating a metamodel that represents the input uncertainty effect on the simulation output performance measure. The algorithm stops when the estimated PGS is no less than 1-α accounting for both prediction error in the metamodel and input uncertainty. We then propose an alternative procedure that terminates significantly earlier while still providing the same (approximate) PGS guarantee by allowing the performance measures of inferior solutions to be estimated with lower precision than those of good solutions. Both algorithms can accommodate nonparametric input models and/or performance measures other than the means (e.g., quantiles). pdfImproving Plan Stability in Semiconductor Master Planning through Stochastic Optimization and Simulation Eric Thijs Weijers (NXP Semiconductors N.V., Eindhoven University of Technology) Program Track: PhD Colloquium Abstract AbstractThis research investigates how stochastic optimization and simulation can reduce plan nervousness in semiconductor supply chain master planning. Due to both demand and supply uncertainty, plans must be periodically optimized. Traditional linear programming models often yield unstable plans due to their sensitivity to input changes. We propose a two-stage stochastic programming model to improve plan stability and an aggregated simulation framework to evaluate the performance of generated plans. 
The two-stage stochastic programming model incorporates demand uncertainty through scenario-based optimization. The simulation framework is used to assess key performance indicators such as on-time delivery and inventory position under rolling horizon conditions. Using real-world data from NXP Semiconductors, we demonstrate that two-stage stochastic programming improves plan stability compared to linear programming, while maintaining comparable on-time delivery and inventory performance. These findings suggest that stochastic optimization and simulation can enhance the robustness of semiconductor supply chain planning. pdfSimulation-based Analysis of Slot Communication Strategies for Outpatient Clinics Aparna Venkataraman (University of Queensland, Indian Institute of Technology Delhi) Program Track: PhD Colloquium Abstract AbstractOperational performance in clinics serving scheduled and walk-in patients depends not only on the scheduling policy but also on the “communicated slot”–the appointment time conveyed to patients. This study uses discrete-event simulation to evaluate the impact of different slot communication strategies on performance within a fixed-slot, split-pool appointment system. Six scenarios are analyzed in which the communicated slot is constructed by offsetting the assigned slot by the mean patient arrival deviation and rounding to the nearest five or ten minutes. The analysis isolates the effects of these adjustments on performance metrics like scheduled and walk-in waiting time, length of stay, server utilization, and overtime. Results indicate that communicating the exact assigned slot yields the most balanced performance. Incorporating arrival offsets, especially with ten-minute rounding at high utilization, significantly degrades performance by increasing patient delays. However, five-minute rounding with or without an offset presents a viable alternative when rounding is an operational requirement. 
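The discrete-event setup in the slot-communication abstract above can be illustrated with a minimal single-server sketch (hypothetical parameter values, not the study's model): patients are scheduled at fixed slots, arrive at the communicated time plus a random deviation, and queue for an exponential service, from which an average waiting time is computed:

```python
import random

def simulate_clinic(n_patients=200, slot=10.0, offset=0.0, rng=random):
    # Single-server clinic sketch: patient i is communicated slot
    # i * slot + offset, arrives with a Gaussian deviation around it,
    # and waits if the server is still busy with earlier patients.
    server_free = 0.0
    waits = []
    for i in range(n_patients):
        communicated = i * slot + offset
        arrival = communicated + rng.gauss(0.0, 3.0)   # arrival deviation
        start = max(arrival, server_free)
        waits.append(start - arrival)
        server_free = start + rng.expovariate(1.0 / 8.0)  # mean 8-min service
    return sum(waits) / len(waits)

random.seed(2)
mean_wait = simulate_clinic()            # baseline: communicate assigned slot
mean_wait_shifted = simulate_clinic(offset=5.0)  # a shifted communication policy
```

Comparing `mean_wait` across `offset` (and rounding) variants is the kind of scenario analysis the study performs, though its full model also covers walk-ins, length of stay, utilization, and overtime.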
pdfImportance Sampling for Latent Dirichlet Allocation Ayeong Lee (Columbia University) Program Track: PhD Colloquium Abstract AbstractLatent Dirichlet Allocation (LDA) is a method for finding topics in text data. Evaluating an LDA model entails estimating the expected likelihood of held-out documents. This is commonly done through Monte Carlo simulation, which is prone to high relative variance. We propose an importance sampling estimator for this problem and characterize the theoretical asymptotic statistical efficiency it achieves in large documents. We illustrate the method in simulated data and in a dataset of news articles. pdfThe Derivative-Free Fully-Corrective Frank-Wolfe Algorithm for Optimizing Functionals Over Probability Spaces Di Yu (Purdue University) Program Track: PhD Colloquium Abstract AbstractThe challenge of optimizing a smooth convex functional over probability spaces is highly relevant in experimental design, emergency response, variations of the problem of moments, etc. A viable and provably efficient solver is the fully-corrective Frank-Wolfe (FCFW) algorithm. We propose an FCFW recursion that rigorously handles the zero-order setting, where the derivative of the objective is known to exist, but only the objective is observable. Central to our proposal is an estimator for the objective's influence function, which gives, roughly speaking, the directional derivative of the objective function in the direction of point mass probability distributions, constructed via a combination of Monte Carlo and a projection onto the orthonormal expansion of an L_2 function on a compact set. A bias-variance analysis of the influence function estimator guides step size and Monte Carlo sample size choice, and helps characterize the recursive rate behavior on smooth non-convex problems. 
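The variance-reduction principle behind the importance sampling abstract above can be shown on a toy problem (not the paper's LDA estimator): estimating the normal tail probability P(Z > 4) by sampling from a shifted proposal N(4, 1) and reweighting each sample by the likelihood ratio of target over proposal:

```python
import math
import random

def is_estimate_tail(threshold=4.0, n=10000, shift=4.0, rng=random):
    # Importance sampling: draw x ~ N(shift, 1), keep samples past the
    # threshold, and weight by the likelihood ratio
    # phi(x) / phi(x - shift) = exp(-shift * x + shift^2 / 2).
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            total += math.exp(-shift * x + 0.5 * shift * shift)
    return total / n

random.seed(3)
p_hat = is_estimate_tail()  # true value is about 3.17e-5
```

Plain Monte Carlo with n = 10000 would see roughly zero hits past x = 4; shifting the proposal onto the rare region makes nearly half the samples contribute, which is the same relative-variance problem the LDA paper attacks for held-out likelihoods.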
pdfASTROMoRF: An Adaptive Sampling Trust-Region Algorithm with Dimensionality Reduction for Large-Scale Simulation Optimization Benjamin Rees (University of Southampton) Program Track: PhD Colloquium Abstract AbstractHigh-dimensional simulation optimization problems have become prevalent in recent years. In practice, the objective function is typically influenced by a lower-dimensional combination of the original decision variables, and implementing dimensionality reduction can improve the efficiency of the optimization algorithm. In this extended abstract, we introduce a novel algorithm ASTROMoRF that combines adaptive sampling with dimensionality reduction, using an iterative trust-region approach. Within a trust-region algorithm, a series of surrogates are built to estimate the objective function. Using a lower-dimensional subspace reduces the number of design points needed for building a surrogate within each trust-region and consequently reduces the number of simulation replications. We introduce this algorithm and comment on its finite-time performance against other state-of-the-art solvers. pdfDynamic Calibration of Digital Twin via Stochastic Simulation: A Wind Energy Case Study Yongseok Jeon (North Carolina State University) Program Track: PhD Colloquium Abstract AbstractWe present a framework to dynamically calibrate a digital twin to support decision-making in systems operating under uncertainty. The framework integrates nested input models that accommodate nonstationarity in independent and response variables with a physics-based model to capture evolving system dynamics with a stochastic simulation. Calibration is formulated as a simulation-optimization problem with an evolving feasible region at each stage to maintain temporal dependence within the calibration parameter. We apply our previously established root-finding strategy to solve this problem with a Gaussian metamodel. 
As a case study, we apply the framework to forecast the short-term power deficit, known as the wake effect, in wind farms using real-world data and demonstrate the robustness of the proposed framework. Besides advancing digital twin research, the presented methodology is expected to advance wind farm wake steering strategy by enabling accurate short-term wake effect prediction. pdf Poster, PhD ColloquiumPhD Colloquium Posters Session Chair: Eunhye Song (Georgia Institute of Technology), Alison Harper (University of Exeter, The Business School), Cristina Ruiz-Martín (Carleton University), Sara Shashaani (North Carolina State University)
SimOpt Workshop Session Chair: David J. Eckman (Texas A&M University), Sara Shashaani (North Carolina State University), Shane G. Henderson (Cornell University) SimOpt Workshop (Pre-registration and Additional Fee Required) David J. Eckman (Texas A&M University), Shane G. Henderson (Cornell University), and Sara Shashaani (North Carolina State University) Abstract AbstractThe SimOpt Workshop, intended for researchers and advanced practitioners who are comfortable working with Python code, introduces SimOpt – an open-source Python library of simulation models and optimization algorithms, and a benchmarking platform. The workshop provides guidance on how to interact with the library. You will learn how to run multiple solvers on multiple problems and generate a range of diagnostic plots, such as the one below, that shed light on the relative performance of solvers, all with minimal effort. You will also learn how to build your own problems and solvers and use them within the platform to run simulation optimization experiments. The workshop also covers new SimOpt capabilities, such as data farming. Data Farming Workshop Session Chair: Paul Sanchez (Naval Postgraduate School), Susan M. Sanchez (Naval Postgraduate School) Data Farming Workshop (Pre-registration and Additional Fee Required) Susan M. Sanchez and Paul Sanchez (Naval Postgraduate School) Abstract AbstractThe pre-conference Data Farming 101 Workshop is designed for newcomers to simulation experiments. Data farming is the process of using computational experiments to grow data, which can then be analyzed using statistical and visualization techniques to obtain insight into complex systems. The focus of the workshop will be on gaining practical experience with setting up and running a simulation experiment. Participants will be introduced to important concepts and jointly explore simulation models in an interactive setting. 
Demonstrations and written materials will supplement guided, hands-on activities through the experiment setup, design, data collection, and analysis phases. Simulation 101 Workshop Session Chair: Raghu Pasupathy (Purdue University), Barry Lawson (Bates College), Lawrence M. Leemis (College of William and Mary) Sim 101 Workshop (Pre-registration and Additional Fee Required) Barry Lawson (Bates College), Lawrence M. Leemis (College of William and Mary), and Raghu Pasupathy (Purdue University) Abstract AbstractThis workshop, designed for newcomers, will cover Monte Carlo and discrete-event simulation. Participants will run and modify existing simulation programs downloaded prior to the workshop. An R package named simEd will be used in the workshop. Simulation Challenge Presentations I Session Chair: Haobin Li (National University of Singapore, Centre for Next Generation Logistics), Martino Luis (University of Exeter) Simulation Challenge Presentations II Session Chair: Haobin Li (National University of Singapore, Centre for Next Generation Logistics), Martino Luis (University of Exeter)