Plenary
WSC@Singapore: Reimagine Tomorrow
Chair: Peter Lendermann (D-SIMLAB Technologies Pte Ltd)
Kenneth Lim (Maritime and Port Authority of Singapore), Ngien Hoon Ping (SMRT Corporation), Ong Kim Pong (PSA International), and Enver Yücesan (INSEAD)
Abstract: In spite of a complete lack of natural resources, Singapore is a global economic powerhouse, ranking as the second-richest country in the world in terms of purchasing power parity. A hub for maritime, logistics, advanced manufacturing, and financial services, the secret to its success is its insatiable appetite for re-invention: “Singapore Reimagined”, part of the Smart Nation initiative, is its latest re-invention project. In this panel, we will hear from the leaders of some of the key Singaporean industries about their contributions to this ambitious national project and the role played by simulation in enabling their contributions.

Plenary
Titans of Simulation: Michael Fu
Chair: Ek Peng Chew (National University of Singapore, Centre for Next Generation Logistics)
Stochastic Gradients: From Single Sample Paths to Conditional Monte Carlo to Machine Learning
Michael Fu (University of Maryland)
Abstract: The speaker will make a humble attempt to convey one perspective of his journey through nearly four decades of simulation methodology research focused on stochastic gradient estimation and simulation optimization, highlighting technical advances and some real-world success stories (mainly of others) and illustrating several techniques using simple examples, sprinkled with personal anecdotes, some historical context, and random ramblings. Perhaps by the end of the talk, with the guidance of the audience, potential future sample paths can be envisioned or reimagined.

Plenary
Titans of Simulation: Leon McGinnis
Chair: Peter Lendermann (D-SIMLAB Technologies Pte Ltd)
Reimagining Simulation in Discrete-Event Logistics Systems
Leon McGinnis (Georgia Institute of Technology)
Abstract: Over fifty years ago, when McGinnis was coding simulation models in Fortran, he imagined that would be how he would always do it. Thankfully, smarter people imagined a different and better way of creating simulation models. Today, the challenge of applying simulation in large-scale, highly automated discrete-event logistics systems to create usable and effective digital twins is more than making the coding process faster and cheaper. It requires re-imagining the relationship between the real system and the simulation of it, and re-imagining the relationship between the system stakeholders and the simulationists.

Track Coordinator - Analysis Methodology: David J. Eckman (Texas A&M University), Jun Luo (Shanghai Jiao Tong University), Wei Xie (Northeastern University)

Analysis Methodology
Uncertainty Quantification
Chair: Raghu Pasupathy (Purdue University)

Empirical Uniform Bounds for Heteroscedastic Metamodeling
Yutong Zhang and Xi Chen (Virginia Tech)
Abstract: This paper proposes pointwise variance estimation-based and metamodel-based empirical uniform bounds for heteroscedastic metamodeling, building on the state-of-the-art nominal uniform bound from the literature by considering the impact of noise variance estimation. Numerical results show that the existing nominal uniform bound requires a relatively large number of design points and a high number of replications to achieve a prescribed target coverage level. On the other hand, the metamodel-based empirical bound outperforms the nominal bound and other competing bounds in terms of empirical simultaneous coverage probability and bound width, especially when the simulation budget is small. However, the pointwise variance estimation-based empirical bound is relatively conservative due to its larger width. When the budget is sufficiently large so that the impact of heteroscedasticity is low, both empirical bounds' performance approaches that of the nominal bound.

Estimating Confidence Regions for Distortion Risk Measures and Their Gradients
Lei Lei (Chongqing University), Christos Alexopoulos (Georgia Tech Research Institute), Yijie Peng (Peking University), and James R. Wilson (North Carolina State University)
Abstract: This article constructs confidence regions (CRs) of distortion risk measures and their gradients at different risk levels based on replicate samples obtained from finite-horizon simulations. The CRs are constructed by batching and sectioning methods, which partition the sample into nonoverlapping batches. Preliminary numerical results show that the estimated coverage rates of the constructed CRs are close to the nominal values.

Overlapping Batch Confidence Regions on the Steady-State Quantile Vector
Raghu Pasupathy (Purdue University), Dashi I. Singham (Naval Postgraduate School), and Yingchieh Yeh (National Central University)
Abstract: The ability to use sample data to generate confidence regions on quantiles is of recent interest. In particular, developing confidence regions for multiple quantile values provides deeper information about the distribution of underlying output data that may exhibit serial dependence. This paper presents a cancellation method that employs overlapping batch quantile estimators to generate confidence regions. Our main theorem characterizes the weak limit of the statistic used in constructing such confidence regions, showing in particular that the derived weak limit deviates from the classical multivariate Student's t and normal distributions depending on the number of batches and the extent of their overlap. We present limited numerical results comparing the effect of fully overlapping versus non-overlapping batches to explore the tradeoff between coverage probability, confidence region volume, and computational effort.

Analysis Methodology
Metamodels and Optimization
Chair: Mina Jiang (Arizona State University)

Robust Simulation Design for Generalized Linear Models in Conditions of Heteroscedasticity or Correlation
Andrew Gill (Australian Department of Defence), David Warne (Queensland University of Technology), Antony Overstall (University of Southampton), and Clare McGrory and James McGree (Queensland University of Technology)
Abstract: A meta-model of the input-output data of a computationally expensive simulation is often employed for prediction, optimization, or sensitivity analysis purposes. Fitting is enabled by a designed experiment, and for computationally expensive simulations, design efficiency is important. Heteroscedasticity in simulation output is common, and it is potentially beneficial to induce dependence through the reuse of pseudo-random number streams to reduce the variance of the meta-model parameter estimators. In this paper, we develop a computational approach to robust design for computer experiments without the need to assume independence or identical distribution of errors. Through explicit inclusion of the variance or correlation structures into the meta-model distribution, either maximum likelihood estimation or generalized estimating equations can be employed to obtain an appropriate Fisher information matrix. Robust designs can then be sought computationally which maximize some relevant summary measure of this matrix, averaged across a prior distribution of any unknown parameters.

Gaussian Processes For High-dimensional, Large Data Sets: A Review
Mengrui (Mina) Jiang and Giulia Pedrielli (Arizona State University) and Szu Hui Ng (National University of Singapore)
Abstract: Gaussian processes, known to have versatile uses in several fields across engineering, science, and economics, offer important advantages over several alternative approaches while controlling model complexity. However, the use of this family of models is hindered for high-dimensional inputs and large sample sizes due to the intractability of the likelihood function and the growth of the variance-covariance matrix. This article reviews state-of-the-art solutions to these challenges and classifies them into categories. The goal is to select several algorithms covering each category and perform empirical experiments to compare their performance on the same set of test functions. Our preliminary results focus on deterministic implementations of a set of selected approaches. The results of the experiments may serve as guidance to readers who want to study and use Gaussian processes in problems with high dimensions and big data sets.

Sample Average Approximation over Function Spaces: Statistical Consistency and Rate of Convergence
Zihe Zhou, Harsha Honnappa, and Raghu Pasupathy (Purdue University)
Abstract: This paper considers sample average approximation (SAA) of a general class of stochastic optimization problems over a function space constraint set and driven by "regulated" Gaussian processes. We establish statistical consistency by proving equiconvergence of the SAA estimator via a sophisticated sample complexity result. Next, recognizing that implementation over such infinite-dimensional spaces is possible only if numerical optimization is performed over a finite-dimensional subspace of the constraint set, and if sample paths of the driving process can be generated over a finite grid, we identify the decay rate of the SAA estimator's expected optimality gap as a function of the optimization error, Monte Carlo sampling error, path generation approximation error, and subspace projection error.

Analysis Methodology
Quantile Estimation
Chair: Drupad Parmar (Lancaster University)

A Sequential Method for Estimating Steady-State Quantiles Using Standardized Time Series
Athanasios Lolos, Joseph Haden Boone, Christos Alexopoulos, and David Goldsman (Georgia Institute of Technology); Kemal Dinçer Dingeç (Gebze Technical University); Anup C. Mokashi (Memorial Sloan Kettering Cancer Center); and James R. Wilson (North Carolina State University)
Abstract: We propose SQSTS, an automated sequential procedure for computing confidence intervals (CIs) for steady-state quantiles based on Standardized Time Series (STS) processes computed from sample quantiles. We estimate the variance parameter associated with a given quantile estimator using the order statistics of the full sample and a combination of variance-parameter estimators based on the theoretical framework developed by Alexopoulos et al. in 2022. SQSTS is structurally less complicated than its main competitors, the Sequest and Sequem methods developed by Alexopoulos et al. in 2019 and 2017. Preliminary experimentation with the customer delay process prior to service in a congested M/M/1 queueing system revealed that SQSTS performed favorably compared with Sequest and Sequem in terms of estimated CI coverage probability, and it significantly outperformed the latter methods with regard to average sample-size requirements.

Tail Quantile Estimation for Non-preemptive Priority Queues
Jin Guang (The Chinese University of Hong Kong, Shenzhen; Shenzhen Research Institute of Big Data); Guiyu Hong and Xinyun Chen (The Chinese University of Hong Kong, Shenzhen); and Xi Peng, Li Chen, Bo Bai, and Gong Zhang (Huawei Technologies, Co., Ltd.)
Abstract: Motivated by applications in computing and telecommunication systems, we investigate the problem of estimating the p-quantile of steady-state sojourn times in a single-server multi-class queueing system with non-preemptive priorities, for p close to 1. The main challenge in this problem lies in efficient sampling from the tail event. To address this issue, we develop a regenerative simulation algorithm with importance sampling. In addition, we establish a central limit theorem for the estimator to construct the confidence interval. Numerical experiments show that our algorithm outperforms benchmark simulation methods. Our result contributes to the literature on rare-event simulation for queueing systems.

Input Uncertainty Quantification for Quantiles
Drupad Parmar and Lucy Morgan (Lancaster University), Susan Sanchez (Naval Postgraduate School), and Andrew Titman and Richard Williams (Lancaster University)
Abstract: Input models that drive stochastic simulations are often estimated from real-world samples of data. This leads to uncertainty in the input models that propagates through to the simulation outputs. Input uncertainty typically refers to the variance of the output performance measure due to the estimated input models. Many methods exist for quantifying input uncertainty when the performance measure is the sample mean of the simulation outputs; however, quantiles, which are frequently used to evaluate simulation output risk, cannot be incorporated into this framework. Here we adapt two input uncertainty quantification techniques for when the performance measure is a quantile of the simulation outputs rather than the sample mean. We implement the methods on two examples and show that both methods accurately estimate an analytical approximation of the true value of input uncertainty.

Analysis Methodology
Estimating Densities and Rare Events
Chair: Bruno Tuffin (Inria, University of Rennes)

Likelihood Ratio Density Estimation for Simulation Models
Florian Puchhammer (University of Waterloo) and Pierre L'Ecuyer (Université de Montréal)
Abstract: We consider the problem of estimating the density of a random variable X which is the output of a simulation model. We show how an unbiased density estimator can be constructed via the classical likelihood ratio derivative estimation method proposed over 35 years ago by Glynn, Rubinstein, and others. We then extend this density estimation method to cover situations where it does not apply directly. What we obtain is closely related to the generalized likelihood ratio method proposed recently by Peng and his co-authors, although the assumptions differ. We compare the methods and assumptions on some examples.
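As a rough, hedged illustration of the classical likelihood-ratio (score-function) derivative estimator that the abstract above builds on (a minimal sketch, not the authors' density estimator; the choice of distribution and of the performance function g is an assumption made here purely for illustration), consider estimating d/dtheta E_theta[g(X)] for X ~ Exponential(rate = theta) in Python:

    import numpy as np

    # Hedged sketch of the classical likelihood-ratio (score-function) gradient
    # estimator; illustrative only, not the density estimator proposed in the paper.
    # X ~ Exponential(rate=theta) with density f_theta(x) = theta * exp(-theta * x),
    # so the score is d/dtheta log f_theta(X) = 1/theta - X.
    rng = np.random.default_rng(1)
    theta, n = 2.0, 200_000
    x = rng.exponential(scale=1.0 / theta, size=n)
    g = np.minimum(x, 1.0)                    # assumed performance function g(X)
    score = 1.0 / theta - x
    lr_estimate = np.mean(g * score)          # unbiased estimate of d/dtheta E_theta[g(X)]
    # Check against the closed form E_theta[min(X, 1)] = (1 - exp(-theta)) / theta.
    h = 1e-5
    exact = ((1 - np.exp(-(theta + h))) / (theta + h)
             - (1 - np.exp(-(theta - h))) / (theta - h)) / (2 * h)
    print(f"LR estimate: {lr_estimate:.4f}, numerical derivative: {exact:.4f}")

Loosely speaking, density estimation enters when the expectation being differentiated is a distribution function, which is the setting the paper develops rigorously.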
Density Estimators of the Cumulative Reward up to a Hitting Time to a Rarely Visited Set of a Regenerative System
Best Contributed Theoretical Paper - Finalist
Marvin K. Nakayama (New Jersey Institute of Technology) and Bruno Tuffin (Inria, University of Rennes)
Abstract: For a regenerative process, we propose various estimators of the density function of the cumulative reward up to hitting a rarely visited set of states. The approaches exploit existing weak-convergence results for the hitting-time distribution, and we apply simulation to estimate parameters of the limiting distribution. We also combine these ideas with kernel methods. Numerical results from simulation experiments show the effectiveness of the estimators.

Rare-Event Simulation without Variance Reduction: An Extreme Value Theory Approach
Yuanlu Bai (Columbia University), Sebastian Engelke (University of Geneva), and Henry Lam (Columbia University)
Abstract: In estimating probabilities of rare events, crude Monte Carlo (MC) simulation is inefficient, which motivates the use of variance reduction techniques. However, these schemes rely heavily on delicate analyses of the underlying simulation models, which are not always easy or even possible. We propose the use of extreme value analysis, in particular the peak-over-threshold (POT) method popularly employed for extremal estimation of real datasets, in the simulation setting. More specifically, we view crude MC samples as data on which to fit a generalized Pareto distribution. We test this idea on several numerical examples. The results show that our POT estimator appears more accurate than crude MC and, while crude MC can easily give a trivial probability estimate of 0, POT outputs a non-trivial estimate of roughly the correct magnitude. Therefore, in the absence of efficient variance reduction schemes, POT appears to offer potential benefits for enhancing crude MC estimates.

Analysis Methodology
Random Processes and Optimization
Chair: Wei Xie (Northeastern University); Zhengchang Hua (Southern University of Science and Technology, University of Leeds)

Efficient Rare Event Estimation For Maxima of Branching Random Walks
Michael Conroy (University of Arizona) and Mariana Olvera-Cravioto (University of North Carolina)
Abstract: We present a hybrid importance sampling estimator that is strongly efficient for tail probabilities of the all-time maximum of a branching random walk, where the increments satisfy a Cramér-Lundberg condition. The estimator uses conditional Monte Carlo in combination with the population dynamics algorithm to compute an expression for the tail of the distribution obtained from a spine change of measure. It has computational complexity (measured by the number of input random vectors required) that is independent of the offspring distribution, allowing for fast computation even when the mean number of offspring is very large. We remark on the consistency of this estimator and give numerical examples.

A Classification Method for Ranking and Selection with Covariates
Gregory Keslin, Barry L. Nelson, and Matthew Plumlee (Northwestern University); Bernardo Pagnoncelli (SKEMA Business School); and Hamed Rahimian (Clemson University)
Abstract: Ranking & selection (R&S) procedures are simulation-optimization algorithms for making one-time decisions among a finite set of alternative systems or feasible solutions with a statistical assurance of a good selection. R&S with covariates (R&S+C) extends the paradigm to allow the optimal selection to depend on contextual information that is obtained just prior to the need for a decision. The dominant approach for solving such problems is to employ offline simulation to create metamodels that predict the performance of each system as a function of the covariate. This paper introduces a fundamentally different approach that solves individual R&S problems offline for various values of the covariate, and then treats the real-time decision as a classification problem: given the covariate information, which system is a good solution? Our approach exploits the availability of efficient R&S procedures, requires milder assumptions than the metamodeling paradigm to provide strong guarantees, and can be more efficient.

Variance Reduction based Partial Trajectory Reuse to Accelerate Policy Gradient Optimization
Hua Zheng and Wei Xie (Northeastern University)
Abstract: Building on our previous study of green simulation assisted policy gradient (GS-PG), which focused on trajectory-based reuse, in this paper we consider infinite-horizon Markov Decision Processes (MDPs) and create a new importance sampling based policy gradient optimization approach to support dynamic decision making. The proposed approach can selectively reuse the most related partial trajectories, i.e., the reuse unit is based on per-step or per-decision historical observations. Specifically, we create a mixture likelihood ratio (MLR) based policy gradient optimization that can leverage the information from historical state-action transitions generated under different behavioral policies. The proposed variance reduction experience replay (VRER) approach can intelligently select and reuse the most relevant transition observations, improve the policy gradient estimation, and accelerate the learning of the optimal policy. Our empirical study demonstrates that it can improve optimization convergence and enhance the performance of state-of-the-art policy optimization approaches such as actor-critic methods and proximal policy optimization.

Track Coordinator - Advanced Tutorials: Wai Kin (Victor) Chan (Tsinghua-Berkeley Shenzhen Institute, TBSI), Hong Wan (North Carolina State University)

Advanced Tutorials
Distributed Agent-based Simulation with Repast4Py
Chair: Haobin Li (National University of Singapore, Centre for Next Generation Logistics)
Nicholson Collier and Jonathan Ozik (Argonne National Laboratory)
Abstract: The increasing availability of high-performance computing (HPC) has accelerated the potential for applying computational simulation to capture ever more granular features of large, complex systems. This tutorial presents Repast4Py, the newest member of the Repast Suite of agent-based modeling toolkits. Repast4Py is a Python agent-based modeling framework that provides the ability to build large, MPI-distributed agent-based models (ABMs) that span multiple processing cores. Simplifying the process of constructing large-scale ABMs, Repast4Py is designed to provide an easier on-ramp for researchers from diverse scientific communities to apply distributed ABM methods. We will present key Repast4Py components and how they are combined to create distributed simulations of different types, building on three example models that implement seven common distributed ABM use cases. We seek to illustrate the relationship between model structure and performance considerations, providing guidance on how to leverage Repast4Py features to develop well-designed and performant distributed ABMs.

Advanced Tutorials
Let's do Ranking & Selection
Chair: Hong Wan (North Carolina State University)
Best Invited Applied Paper - Finalist
Barry Nelson (Northwestern University)
Abstract: Many tutorials and survey papers have been written on ranking & selection (R&S) because it is such a useful tool for simulation optimization when the number of feasible solutions or "systems" is small enough that all of them can be simulated. Cheap, ubiquitous, parallel computing has greatly increased the "all of them can be simulated" limit. Naturally these tutorials and surveys have focused on the underlying theory of R&S and have provided pseudocode procedures. This tutorial, by contrast, emphasizes applications, programming, and interpretation of R&S, using the R programming language for illustration. Readers (and the audience) can download the code and follow along with the examples, but no experience with R is needed.

Advanced Tutorials
Hybrid Simulation Modeling Formalism via O²DES Framework for Mega Container Terminals
Chair: Michael Kuhl (Rochester Institute of Technology)
Haobin Li, Xinhu Cao, Ek Peng Chew, Kok Choon Tan, Kaustav Kundu, and Hongdan Chen (National University of Singapore)
Abstract: This paper briefly introduces the hybrid simulation modeling formalism via the O²DES Framework (object-oriented discrete event simulation) and its application to the modeling of mega container ports. From an object-oriented perspective, this paper first lists the entities involved in container ports and the relationships among them. Building on this, the "event-based" modeling method is illustrated to show its necessity in accurately describing the rules of discrete-event systems, using the case of quay cranes and AGVs interacting via handshakes. Then, the hierarchical structure of the model is segmented with a "state-based" modular approach to simplify the modeling process as well as the maintenance and reusability of the model library. Finally, the "activity-based" perspective provides a macro-level overview of the dynamics of the flowing entities in the system. This paper elaborates on the connection and cooperation between the multiple methods in the hybrid formalism, hoping to provide guidance for future modeling efforts.

Advanced Tutorials
EMS Operations Management: Simulation, Optimization, and New Service Models
Chair: Wentong Cai (Nanyang Technological University)
Nan Kong, Juan C. Paz, and Xiaoquan Gao (Purdue University)
Abstract: EMS is critical to the entire health care industry. In this tutorial, we provide a glimpse of significant research achievements in EMS operations management over the last decades. We focus on simulation models and their use in real-time ambulance dispatching, routing (ED selection), and redeployment decisions. We introduce optimization-based studies on ambulance management policies that have gained significant attention over the recent decade. We next describe our recent studies that optimize two emerging service models with the potential of revolutionizing EMS delivery, especially in areas with poor EMS access. Lastly, we describe prominent challenges at present, offer reflections on ongoing work, and outline future research.
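To make the dispatching decisions mentioned in the tutorial abstract above concrete, here is a deliberately tiny, self-contained Python sketch of a nearest-available ambulance dispatch simulation; it is not the authors' model, and the corridor geometry, arrival and service rates, and the assumption that ambulances return to base instantly are all illustrative assumptions.

    import random

    # Toy sketch of nearest-available ambulance dispatching; illustrative only.
    random.seed(0)
    bases = [2.0, 5.0, 8.0]            # assumed base locations along a 10 km corridor
    free_at = [0.0] * len(bases)       # time at which each ambulance is next idle
    speed, service = 1.0, 20.0         # km per minute; on-scene plus handover minutes
    t, response_times = 0.0, []
    for _ in range(200):               # 200 calls arriving as a Poisson process
        t += random.expovariate(1 / 15)            # mean 15 minutes between calls
        loc = random.uniform(0.0, 10.0)            # call location
        # dispatch the ambulance that can reach the call earliest
        k = min(range(len(bases)),
                key=lambda i: max(free_at[i], t) + abs(bases[i] - loc) / speed)
        arrival = max(free_at[k], t) + abs(bases[k] - loc) / speed
        free_at[k] = arrival + service             # ambulance busy until service ends
        response_times.append(arrival - t)         # call-to-arrival time
    print(f"mean response time: {sum(response_times) / len(response_times):.1f} min")

Replacing the greedy dispatch rule (the key function above) with an optimization-based policy is, in spirit, where the optimization studies surveyed in the tutorial come in.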
Advanced Tutorials
From Discovery to Production: Challenges and Novel Methodologies for Next Generation Biomanufacturing
Chair: Nan Kong (Purdue University)
Wei Xie (Northeastern University) and Giulia Pedrielli (Arizona State University)
Abstract: The increasing demand for novel personalized drugs has put remarkable pressure on the traditionally long time required from design to production of new products. Practitioners are moving away from the classical paradigm of large-scale batch production to continuous biomanufacturing with flexible and modular design, which is further supported by recent technological advances in single-use equipment. In contrast to long design processes and low product variability, modern pharma players are answering the question: can we bring design and process control up to the speed that novel production technologies give us to quickly set up a flexible production run? In this tutorial, we present key challenges and potential solutions from the world of operations research that can support answering such a question. We first present technical challenges and novel methods for the design of next generation drugs, followed by the process modeling and control approaches to successfully and efficiently manufacture them.

Advanced Tutorials
A Tutorial on How to Set Up a System Dynamic Simulation on the Example of the Covid-19 Pandemic
Chair: Jonathan Ozik (Argonne National Laboratory)
Stina Dellas, Abdelgafar Ismail Mohammed Hamed, Hans Ehm, and Anna Hartwick (Infineon Technologies AG)
Abstract: The Covid-19 virus has substantially transformed many aspects of life, impacted industries, and revolutionized supply chains all over the world. System dynamics modeling, which incorporates systems thinking to understand and map complex events as well as correlations, can aid in predicting future outcomes of the pandemic and generate key learnings. As system dynamics modeling allows for a deeper understanding of the manifestation and dynamics of disease, it was helpful when examining the implications of the pandemic on the supply chain of semiconductor companies. This tutorial describes how the system dynamics simulation model was constructed for the Covid-19 pandemic using AnyLogic software. The model serves as a general foundation for further epidemiological simulations and system dynamics modeling.

Advanced Tutorials
Advanced Tutorial: Methods for Scalable Discrete Simulation
Chair: Abdelgafar Hamed (Infineon Technologies AG)
Philipp Andelfinger (University of Rostock) and Wentong Cai (Nanyang Technological University)
Abstract: Discrete simulation is an indispensable approach to investigate systems whose complexity prohibits analytical modeling and for which real-world experimentation is costly or dangerous. To keep pace with system models of increasing levels of detail and scope, a simulator's ability to make full use of the available hardware, typically through parallel and distributed simulation, is a vital concern. Building on well-studied synchronization algorithms, the field's focus has shifted towards aspects such as the avoidance of redundant computations in ensemble studies and the exploitation of heterogeneous hardware platforms. In this tutorial, we describe the fundamental notions of parallel and distributed simulation and summarize the main classes of synchronization algorithms, as well as their use under the constraints of the transportation and spiking neural network domains. Current research directions and challenges are discussed in light of the tension between efficiency through specialization and wide applicability through generalization.

Advanced Tutorials
Blockchain: a Review from the Perspective of Operations Research
Chair: Wei Xie (Northeastern University)
Hong Wan, Yining Huang, and Kejun Li (North Carolina State University)
Abstract: Blockchain is a distributed, append-only digital ledger (database). The technology has caught much attention since the emergence of cryptocurrency, and there is an increasing number of blockchain applications in a wide variety of businesses. The concept, however, is still novel to many members of the simulation and operations research community. In this tutorial, we introduce blockchain technology and review its frontier operations- and data-related research. There are exciting opportunities for researchers in simulation, system analysis, and data science.

Track Coordinator - Agent-Based Simulation: Chris Kuhlman (University of Virginia), Bhakti Stephan Onggo (University of Southampton)

Agent-based Simulation, Logistics, Supply Chains, Transportation
Organization of Transport Systems
Chair: Michael Kuhl (Rochester Institute of Technology)

Designing Mixed-Fleet of Electric and Autonomous Vehicles for Home Grocery Delivery Operation: An Agent-Based Modelling Study
Dhanan Sarwo Utomo, Adam Gripton, and Philip Greening (Heriot-Watt University)
Abstract: This paper proposes a hypothetical agent-based model of a home grocery delivery operation using electric and autonomous vehicles. In the last-mile delivery context, agent-based modelling studies that consider the use of autonomous vehicles are lacking. The model in this paper can produce a mixed-fleet design that can serve a set of synthetic orders punctually. Through extensive computer experimentation, firstly, we investigate how the infrastructure setup affects the fleet design. Secondly, we highlight the benefits of a mixed fleet over a homogeneous fleet design. Thirdly, we evaluate the benefits of using autonomous vehicles in last-mile delivery operation.

Optimal Fleet Policy of Rental Vehicles with Relocation in New Zealand: Agent-based Simulation
John-Carlo Favier, Subhamoy Ganguly, and Timofey Shalpegin (University of Auckland)
Abstract: An important strategic decision for rental car operators is whether to implement a single-fleet or multi-fleet model. The single-fleet model allows the movement of vehicles between regions, whereas the multi-fleet model does not. In practice, different rental car operators use different models. To address this problem, we have developed two simulation models and compared them in terms of fleet utilization, branch service level, relocations, and, ultimately, operating profit. We have taken the New Zealand rental car industry as an example, as the country consists of two well-defined regions and one-way southbound travel is a preferred option for many customers. The results indicate that a multi-fleet model has a higher service level at key centers and higher utilization. At the same time, the single-fleet model is relatively more profitable, at the expense of a lower service level in key centers due to vehicles accumulating in the South Island.
Development of a Simulation Framework for Urban Ropeway Systems and Analysis of the Planned Ropeway Network in Regensburg, Germany
Simon Haimerl, Christoph Tschernitz, Tobias Schiller, Christoph Weig, Stefan Galka, and Ulrich Briem (Ostbayerische Technische Hochschule Regensburg)
Abstract: To evaluate the performance of a ropeway in an urban environment, simulations of the dynamic passenger transport characteristics are required. Therefore, a modular simulation model for urban ropeway networks was developed, which can be flexibly adapted to any city and passenger volume. This simulation model was used to analyze the ropeway network concept of the German city of Regensburg and to determine the expected operating conditions. The passenger volume, the different types of persons, their occurrence probabilities, and their destination distributions depend on the location and time of day and can be defined for each individual station. In an initial analysis, the number of passengers currently carried by bus traffic was projected onto the ropeway network. To enable climate-friendly and efficient operation, different strategies were developed to significantly reduce the number of gondolas. The best-fitting strategies resulted in significant cost savings while passenger comfort, as represented by queue time, remained unchanged.

Agent-based Simulation
Methodological Issues with Multi-Agent Games
Chair: Yan Lu (Old Dominion University)

Extending The Naming Game In Social Networks To Multiple Hearers Per Speaker
Aradhana Soni, Kalyan S. Perumalla, and Xueping Li (University of Tennessee)
Abstract: Social conventions govern numerous behaviors of humans engaging in day-to-day activities, from how they greet to the languages they speak. The classical Naming Game algorithm has been defined with inherently sequential semantics, where agents engage in pairwise interactions and reach global consensus in the absence of any outside coordinating authority. In this paper, we extend the classic Naming Game to multiple hearers per speaker in each conversation, while also allowing simultaneous “speaking” and “hearing”. We simulate the impact on the number of conversations needed for convergence by varying the number of hearers, and investigate the impact of multiple network types and agent population sizes on global convergence. The results show that our extended model, combining simultaneous conversations and multiple hearers per speaker per conversation, makes the words diffuse at a much faster rate and leads to significantly faster consensus formation.

A Bayesian Uncertainty Quantification Approach for Agent-Based Modeling of Networked Anagram Games
Xueying Liu, Zhihao Hu, and Xinwei Deng (Virginia Tech) and Chris Kuhlman (University of Virginia)
Abstract: In group anagram games, players cooperate to form words by sharing letters that they are initially given. The aim is to form as many words as possible as a group, within five minutes. Players take several different actions: requesting letters from their neighbors, replying to letter requests, and forming words. Agent-based models (ABMs) for the game compute likelihoods of each player's next action, which contain uncertainty, as they are estimated from experimental data. We adopt a Bayesian approach as a natural means of quantifying uncertainty, to enhance the ABM for the group anagram game. Specifically, a Bayesian nonparametric clustering method is used to group players into different clusters without pre-specifying the number of clusters. Bayesian multinomial regression is adopted to model the transition probabilities among the different actions of the players in the ABM. We describe the methodology and its benefits, and perform agent-based simulations of the game.

Agent-based Simulation
Emergent Behaviors and Construction Labor Productivity
Chair: Chris Kuhlman (University of Virginia)

Identifying Correlates of Emergent Behaviors In Agent-Based Simulation Models Using Inverse Reinforcement Learning
Best Contributed Theoretical Paper - Finalist
Faraz Dadgostari (University of Virginia); Stephen Adams and Peter Beling (Virginia Tech National Security Institute); Henning S. Mortveit (University of Virginia, Department of Engineering Systems and Environment (SEAS)); and Samarth Swarup (University of Virginia)
Abstract: In large agent-based models, it is difficult to correlate system-level dynamics with individual-level attributes. In this paper, we use inverse reinforcement learning (IRL) to estimate compact representations of behaviors in large-scale pandemic simulations in the form of reward functions. We illustrate the capacity and performance of these representations in identifying agent-level attributes that correlate with the emerging dynamics of large-scale multi-agent systems. Our experiments use BESSIE, an ABM for COVID-like epidemic processes, where agents make sequential decisions (e.g., use PPE/refrain from activities) based on observations (e.g., number of mask-wearing people) collected when visiting locations to conduct their activities. The IRL-based reformulations of simulation outputs perform significantly better in classification of agent-level attributes than direct classification of decision trajectories, and are thus more capable of determining agent-level attributes with a definitive role in the collective behavior of the system. We anticipate that this IRL-based approach is broadly applicable to general ABMs.

Agent-Based Modelling and Simulation of Multidimensional Impacts of Construction Labor Productivity Factors
Lynn Shehab, Diana Salhab, Elyar Pourrahimian, Mohamed ElMenshawy, and Farook Hamzeh (University of Alberta)
Abstract: Despite numerous attempts to quantify the impacts of factors influencing productivity in the construction industry, such factors are still perceived as static and independent, resulting in unrealistic productivity estimates. Therefore, this paper investigates the factors' impacts not only on productivity, but also on each other. The objective is to highlight the necessity of perceiving the already heavily researched factors affecting productivity as dynamic and interdependent, through a multidimensional lens. Two generic agent-based models are built to simulate the outcomes of a project through varying levels of detail, each investigating a certain set of impacts. The first model includes the quantified impacts of the factors on productivity (traditional approach), while the second encompasses all quantified impacts of the factors on productivity and on each other (comprehensive approach). Findings demonstrate the accuracy of the proposed comprehensive approach in estimating durations compared to planned durations and to those obtained from the traditional approach.
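As a purely numerical illustration of the distinction the abstract above draws between the traditional (independent factors) and comprehensive (interdependent factors) treatments, the following Python sketch compares project durations under the two views; the factor names, multipliers, and interaction strength are assumptions made here and are not taken from the paper.

    # Illustrative comparison of independent vs. interdependent productivity factors.
    # All values below are assumptions for demonstration only.
    base_rate = 10.0                    # work units completed per day at nominal productivity
    scope = 500.0                       # total work units in the project
    crowding, fatigue = 0.90, 0.95      # multiplicative impacts on productivity

    # Traditional approach: factors act on productivity independently.
    rate_traditional = base_rate * crowding * fatigue

    # Comprehensive approach: crowding also worsens fatigue (a factor-on-factor impact).
    fatigue_adjusted = fatigue * (1.0 - 0.5 * (1.0 - crowding))   # assumed interaction
    rate_comprehensive = base_rate * crowding * fatigue_adjusted

    print(f"traditional duration:   {scope / rate_traditional:.1f} days")
    print(f"comprehensive duration: {scope / rate_comprehensive:.1f} days")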
Agent-based Simulation
Evacuation Modeling and Societal Polarization
Chair: Anastasia Anagnostou (Brunel University London)

Simulating Emergency Evacuations with a Learnable Behavioural Model
Muhammad Shalihin Bin Othman and Gary Tan (National University of Singapore)
Abstract: Simulation of evacuations in emergencies is crucial in preparing authorities throughout the world to mitigate disastrous outcomes from an unforeseen crisis. In an effort to increase the effectiveness of such critical systems, several works have attempted to introduce intelligence into Multi-Agent Systems (MAS) for crisis simulation by incorporating psychological behaviours learned from the social sciences or by using data-driven machine learning models with predictive capabilities. A recently proposed Conscious Movement Model (CMM) has shown dynamic capabilities to learn and change an agent's movement patterns as conditions evolve in the environment. This research work proposes an effective framework to integrate the trained CMM into a simulation model for emergency evacuations in order to achieve realistic outcomes. The evaluation is carried out on a real-life case study of emergency evacuations in a classroom. The results show that we can produce realistic simulations similar to actual results, performing better than state-of-the-art methods.

Simulation-based Analysis of Evacuation Elevator Allocation for A Multi-level Hospital Emergency Department
Best Contributed Applied Paper - Finalist
Boyi Su and Jaeyoung Kwak (Nanyang Technological University); Ahmad Reza Pourghaderi (Singapore Health Services); Michael H. Lees (Amsterdam UMC, University of Amsterdam); Kenneth Tan, Shin Yi Loo, Ivan S. Y. Chua, and Joy L. J. Quah (Singapore General Hospital); Wentong Cai (Nanyang Technological University); and Marcus E. H. Ong (Duke-NUS Medical School)
Abstract: Evacuation planning for hospital emergency departments is challenging because of the large number of patients with limited mobility due to severe illness. For trolley-ridden patients, elevators are often the only available mode for vertical evacuation. Thus, the allocation of trolley-ridden patients to elevators is important to reduce vertical evacuation time with a limited number of elevators. We developed a simulation model of vertical evacuation using elevators and applied the model to the future Singapore General Hospital emergency department as a case study. In the case study, we divided trolley-ridden patients into several groups based on their locations and evaluated the maximum evacuation time for various allocation setups. Simulation results show that evacuation on the lower level is sensitive to the allocation on the upper level. The overlapping utilization of the shared elevator by each level may lead to long queuing times at the lower levels and consequently increase the overall evacuation time.

The Effect of Influencers on Societal Polarization
John Maurice Betts (Monash University) and Ana-Maria Bliuc (The University of Dundee)
Abstract: Societies can polarize when there is disagreement on important issues. The rise of social media in recent years has led to the phenomenon of influencers, who are now prominent in public debate, especially online. However, their effect on polarization is not well understood. To address this gap in knowledge, we create an agent-based model of a society into which an influencer is introduced and their effect on polarization observed. Results show that an influencer holding extreme views will always increase the rate of polarization, with this effect increasing in line with their reach and activity level. The effect of a neutral influencer varies with the tolerance for opposing beliefs in the society: slowing the rate of polarization in relatively tolerant societies, but increasing the rate when societies are more conservative or the influencer has narrow reach. Consequently, these results have implications for the design of influencer campaigns for social good.

Track Coordinator - Aviation Modeling and Analysis: Miguel Mujica Mota (Amsterdam University of Applied Sciences), John Shortle (George Mason University)

Aviation Modeling and Analysis
Aviation I: Human-in-the-Loop
Chair: Dehghani Mohammad (Northeastern University)

A Heuristic-based Airport Shopping Behavior Model with Agent-based Simulation
Yimeng Chen, Cheng-Lung Wu, and Ngai Ki Ma (UNSW Sydney)
Abstract: In recent years, the source of airport revenue has significantly changed. Accordingly, many airports have adjusted their strategies and focused on increasing retail revenue to improve financial sustainability. However, there is a lack of application of shopping behavior models to airport retail development. This paper aims to fill this gap by presenting a heuristic shopping behavior model. First, this paper briefly reviews the existing literature on heuristic shopping behavior. Second, two data collections were conducted at a live airport to calibrate and validate the proposed agent-based simulation model. The Mean Absolute Percentage Error of the model stands at 5.3% on total footfall across all shops. The validation result demonstrates the feasibility of the proposed model in simulating heuristic shoppers in airport retail. The proposed model provides an excellent foundation for future scenario studies on airport retail.

Modelling Aircraft Priority Assignment by Air Traffic Controllers during Taxiing Conflicts Using Machine Learning
Vidurveer Duggal, Thanh-Nam Tran, Duc Thinh Pham, and Sameer Alam (Nanyang Technological University)
Abstract: Conflicts between taxiing aircraft are resolved by making the aircraft with lower priority wait, slow down, or change its path. Prevalent priority assignment is based on rules such as First Come First Serve. However, this is not always viable, as priority assignment is done by an air traffic controller (ATC) based on multiple factors. Thus, a machine learning approach is proposed to mimic an ATC's priority assignment. Firstly, potential conflict scenarios between two aircraft that were resolved are detected and extracted from historical data. Then a Random Forest model is developed to learn the ATC's behaviors. The model mimics the ATC's behavior with an accuracy of 89% and can thus be an effective approach for priority assignment in path planning and conflict resolution. Further analysis indicates that features such as unimpeded time difference, distance to destination and start, and speed are major considerations that affect the ATC's decisions.

A Multi-Agent Reinforcement Learning Approach for System-Level Flight Delay Absorption
Kanupriya Malhotra, Zhi Jun Lim, and Sameer Alam (Air Traffic Management Research Institute)
Abstract: With increasing air traffic, there is an ever-growing need for Air Traffic Controllers (ATCOs) to efficiently manage traffic and congestion. Congestion often leads to increased delays in the Terminal Maneuvering Area (TMA), causing large amounts of fuel burn and detrimental environmental impacts. Approaches such as the Extended Arrival Manager (E-AMAN) propose solutions to absorb such delays, whereby flights are scheduled well before they enter the TMA. However, such an approach requires a speed management system in which flights can coordinate to absorb system-level delays during their en-route phase. This paper proposes a Multi-Agent System (MAS) approach using deep reinforcement learning to model and train flights as agents which can coordinate with each other to effectively absorb system-level delays. The simulations utilize MultiAgent POsthumous Credit Assignment in Unity and test two reward approaches. Initial findings reveal an average of 3.3 minutes of system-level delay absorption from a required delay of 4 minutes.

Aviation Modeling and Analysis
Aviation II: Aviation Operations and Airspace
Chair: Michael Schultz (Bundeswehr University Munich)

Towards Automated Apron Operations - Training of Neural Networks for Semantic Segmentation using Synthetic LiDAR Sensors
Michael Schultz (Bundeswehr University Munich), Stefan Reitmann and Bernhard Jung (Freiberg University of Mining and Technology), and Sameer Alam (Nanyang Technological University)
Abstract: For safe operations at the airport apron, controllers are supported by an appropriate sensor environment. Deep learning models could improve the classification of observed objects, but these models require a large amount of data to be trained. Therefore, we developed a virtual airport environment to generate the required training and validation data for any operational scenario. A synthetic LiDAR sensor is implemented in this environment and applied at Singapore Changi Airport. Using different data sources, the airport infrastructure and objects are modelled and a multitude of 3D scenes are generated. From these scenes, a point cloud is extracted from the LiDAR sensor feedback. This point cloud is already labelled by the underlying models (ground truth) and serves as input for PointNet++ to be trained for efficient segmentation and classification. We show that training with synthetic input data is a promising approach, even assuming degradation of the sensor feedback.

On the Role of HLA-based Simulation in New Space
Frank Morlang (German Aerospace Center (DLR)) and Steffen Strassburger (Technische Universität Ilmenau)
Abstract: This paper discusses High Level Architecture (HLA) based simulation in the context of the emergence of the private spaceflight industry known as New Space. We postulate that distributed simulation plays a fundamental role in facilitating new opportunities for cost-efficient access to space. HLA defines a simulation system's architecture framework with a focus on reusability and interoperability. The article therefore discusses the impact of its usage on the potential for affordable new aerospace system developments. Future possibilities with an increased level of loose component coupling are presented.
Discrete- Event Supervisory Control for the Landing Phase of a Helicopter Flight James Horner, Tanner TRAUTRIM, and Cristina Ruiz-Martín (Carleton University); Iryna BORSHCHOVA (National Research Council of Canada); and Gabriel Wainer (Carleton University) Abstract Abstract We introduce a new method for supervisory control used in the landing phase of an autonomy system for the Bell-412 Advanced Systems Research Aircraft, which includes fly-by-wire capabilities. The complexity of the autonomy system makes it necessary to include a high-level supervisory controller that monitors mission’s progress and allocates resources on board accordingly. Supervisory controllers are commonly embedded within a monolithic program and lack explicit state flows, requiring significant effort to modify and test the system’s behavior. This research uses the DEVS formalism and the Cadmium simulation engine to model, implement, verify, validate, test, and deploy a state-based and event-driven supervisory controller for helicopters. We use the NRC’s Bell-412 helicopter autonomy system as a case study to present the whole development cycle. The methodology is illustrated with simulated models that were tested using graphical specifications and domain experts and verified using Cadmium in both simulated and real-time testing suites Track Coordinator - Complex and Resilient Systems: Saurabh Mittal (MITRE Corporation), Claudia Szabo (The University of Adelaide, University of Adelaide) Complex and Resilient Systems Modeling and Data and their Effect on Policy Chair: Claudia Szabo (University of Adelaide, The University of Adelaide) Exploring Covid-19 Survivor Perception toward Governement’s Policies in Responding to Covid-19 Anggraini Dwi Saputri and Hilya Mudrika Arini (Universitas Gadjah Mada) Abstract Abstract The effective strategies could be generated by understand the big picture of problem through system thinking. Besides finding the cure of Covid-19, the stakeholder need to formulate the good risk communication to their society thus the society could accept the right message and act the right response. Before formulating the good risk communication, it is very important to understand our society and their perception toward the pandemic. The understanding of perception is necessary so we could give the balance response between risk and response. The exploration through Causal Loop Diagram (CLD) can show the structure of given system and help to capture the mental model of object. This study aims to develop CLD model of risk perception toward government attempts in handling Covid-19, so that it can help government to formulate the strategies by proposing suggestion based in risk perception of society. SITEM: A Framework for Integrated Transport and Energy Systems Modelling for City-wide Electrification Scenario Planning Muhamad Azfar Ramli, Ilya Farber, and Zheng Qin (Institute of High Performance Computing, A*STAR) and Tobias Massier and David Eckhoff (TUMCREATE Ltd) Abstract Abstract Adopting transport electrification is a key initiative for cities combating climate change. However, transitioning from combustion engine vehicles refueling at petrol stations to battery-electric vehicles (EVs) that need to charge from the distribution grid, presents significant challenges in planning for new infrastructure and behaviour patterns. 
We developed an advanced modelling framework codenamed SITEM (Singapore Integrated Transport and Energy Model), that combines behavioural modelling, geospatial optimisation, agent-based simulation of multi-modal city-wide transport and a high-fidelity digital twin of the distribution grid. The framework is designed to address infrastructure planning challenges faced by local agencies such as determining the optimal placement of chargers, and quantifying the impact of electrification on the grid. We showcase some key technical achievements of the model, including simulation-based validation of the quality of different charger placement schemes, and determining the efficacy of smart charging in shared public residential chargers in helping to reduce peak energy demand. Effects of Information Sharing on Swarm Based Communication In Dynamic Environments Jenny Tran (University of Adelaide); Peter Furle (Swordfish Computing); and Claudia Szabo (University of Adelaide, The University of Adelaide) Abstract Abstract The use of swarm intelligence aims to aid information sharing in contested and dynamic environments, where network bandwidth is limited and computational resources are overused. In this paper, we examine the effectiveness of a swarm-inspired data ferrying algorithm in the context of a dynamic environment. We simulate a beehive and observe how bees perform as a swarm through exchanging incomplete data items. Our analysis shows improvements of up to 35\% when a stigmergic approach is used, showing the benefit of stigmergic information sharing within various communication protcols. Complex and Resilient Systems Cybersecurity in Complex Resilient Systems Chair: Claudia Szabo (University of Adelaide, The University of Adelaide) Cyber Deception Metrics for Interconnected Complex Systems Md ali reza Al amin and Sachin Shetty (Old Dominion University) and Charles Kamhoua (US Army Research Laboratory) Abstract Abstract Cyber attackers' evolving skills cause it challenging to secure the network. Thus it is paramount to characterize adversarial strategies and estimate the attacker's capability. Furthermore, estimating the adversarial capability can aid the cyber defender when deciding to place deceptive elements in the network. In this paper, we address the problem of characterizing adversarial strategies and develop a suite of metrics that quantify the opportunity and capability of the adversary. Using these metrics, the cyber defender can estimate the attacker's capability. In our simulation, we incorporated the developed metrics to estimate adversary capabilities based on the attacker's aggression, knowledge, and stealthiness level. To minimize the adversarial impact, we consider placing decoy nodes as deceptive elements in the network and measure the effectiveness of having decoy nodes. Our experimental evaluation suggests that placing decoy nodes in the network can effectively increase the attacker's resource usage and decrease the win percentage. A Dynamic Theory of Security Free-Riding by Firms in the WFH Age Ranjan Pal (University of Cambridge), Rohan Xavier Sequeira (University of Michigan), and Yufei Zhu and Yushi She (Univeristy of Michigan) Abstract Abstract The COVID-19 pandemic has radically transformed the work-from-home (WFH) paradigm, and expanded an organization’s cyber-vulnerability space. We propose a novel strategic method to quantify the degree of sub-optimal cybersecurity in an organization of employees, all of whom work in heterogeneous WFH ”siloes”. 
Specifically, we model the per-unit cost of asymmetric WFH employees to invest in securityimproving effort units as time-discounted exponential martingales over time, and derive as benchmark - the centrally-planned socially optimal aggregate employee effort at any given time instant. We then derive the time-varying strategic Nash equilibrium amount of aggregate employee effort in cybersecurity in a distributed setting. The time-varying ratio of these centralized and distributed estimates quantifies the free riding dynamics, i.e., security sub-optimality, within an organization. Rigorous estimates of the degree of sub-optimal cybersecurity will drive organizational policy makers to design appropriate (customized) solutions that voluntarily incentivize WFH employees to invest in required cybersecurity best practices. Track Coordinator - Covid-19 and Epidemiological Simulations: Edward Huang (George Mason University), Hui Xiao (Southwestern University of Finance and Economics) COVID-19 and Epidemiological Simulations Effectiveness of Interventions Against the Spread of COVID-19 Chair: Edward Huang (George Mason University) Calibrating Simulation Models with Sparse Data: Counterfeit Supply Chains During COVID-19 Isabelle M. van Schilt and Jan H. Kwakkel (Delft University of Technology), Jelte P. Mense (Utrecht University), and Alexander Verbraeck (Delft University of Technology) Abstract Abstract COVID-19 related crimes like counterfeit Personal Protective Equipment (PPE) involve complex supply chains with partly unobservable behavior and sparse data, making it challenging to construct a reliable simulation model. Model calibration can help with this, as it is the process of tuning and estimating the model parameters with observed data of the system. A subset of model calibration techniques seems to be able to deal with sparse data in other fields: Genetic Algorithms and Bayesian Inference. However, it is unknown how these techniques perform when accurately calibrating simulation models with sparse data. This research analyzes the quality-of-fit of these two model calibration techniques for a counterfeit PPE simulation model given an increasing degree of data sparseness. The results demonstrate that these techniques are suitable for calibrating a linear supply chain model with randomly missing values. Further research should focus on other techniques, larger set of models, and structural uncertainty. Regional Maximum Hospital Capacity Estimation for Covid-19 Pandemic Patient Care in Surge through Simulation Best Invited Applied Paper - Finalist Bahar Shahverdi, Hadi Ghayoomi, and Elise Miller-Hooks (George Mason University); Mersedeh Tariverdi (World Bank Group); and Thomas Kirsch (Uniformed Services University) Abstract Abstract Estimating the capacity of a region to serve pandemic patients in need of hospital services is crucial to regional preparedness for pandemic surge conditions. This paper explores the use of techniques of stochastic discrete event simulation for estimating the maximum number of pandemic patients with intensive care and/or in-patient, isolation requirements that can be served by a consortium of hospitals in a region before requesting external resources. Estimates from the model provide an upper bound on the number of patients that can be treated if all hospital resources are re-allocated for pandemic care. The modeling approach is demonstrated on a system of five hospitals each replicating basic elements (e.g. 
number of beds) of the five hospitals in the Johns Hopkins Hospital System in the Baltimore-Washington, D.C. Metropolitan area under settings relevant to the COVID-19 pandemic. Simulating Counterfeit Personal Protective Equipment (PPE) Supply Chains During Covid-19 Layla Hashemi, Chu Chuan Jeng, Ahna Mohiuddin, Edward Huang, and Louise Shelley (George Mason University) Abstract Abstract Increased demand for medical supplies, specifically respirators and face masks, during the COVID-19 pandemic, along with the inability of legitimate suppliers to meet these needs, created a window of opportunity for counterfeiters to capitalize on the supply chain disruptions caused by a global health crisis. Both legitimate and illicit businesses began shifting their scope from sectors such as textiles to producing and distributing personal protective equipment (PPE), many of which were counterfeit or inauthentic products and thus unable to properly protect users. To study cost-effective disruption strategies, this study proposes a simulation-optimization framework. The framework is used to model counterfeiters’ behavior and analyze the effectiveness of different disruption strategies for counterfeit PPE supply chains during the COVID-19 pandemic. COVID-19 and Epidemiological Simulations Modeling the Spread of COVID-19 Chair: Felisa Vazquez-Abad (Hunter College CUNY) Covid-19 Supply Chain Planning: A Simulation-optimization Approach Samaneh Maghoulan, Hande Musdal Ondemir, and Mohammad Dehghanimohammadabadi (Northeastern University) Abstract Abstract Healthcare providers’ preparedness and response plans are crucial to effectively cope with infectious disease outbreaks such as COVID-19. These plans need to provide strategic and operational actionable insights to guarantee the availability of essential resources when needed. This study uses a simulation-optimization approach to (i) determine an optimal replenishment policy to restock personal protective equipment (PPE) items, and (ii) determine proactive demand planning for critical resources such as beds and ventilators. This model leverages a Simio-MATLAB integration to complete simulation and optimization tasks. Impact of Vaccination Policies for COVID-19 using Hybrid Simulation Felisa J. Vazquez-Abad (CUNY), Daniel Dufresne (Concordia University), and Gi-Beom Park (Hunter College CUNY) Abstract Abstract A stochastic model for individual immune response is developed. This model is then incorporated in a larger simulation model for the spread of COVID-19 in a population. The simulator allows random transitions between being susceptible, exposed, or having mild or severe symptoms, as well as random non-exponential sojourn times in those states. The model is more efficient than others based on geographical location, where the virus spreads according to actual distance between individuals. We are able to simulate much larger populations and vary parameters such as time between vaccinations, probability of infection, and so on. We present an application to study the effects on healthcare as a function of vaccination policies.
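To make the individual-level idea above concrete, the following minimal Python sketch simulates a single person's disease progression with random state transitions and non-exponential sojourn times; the states, branching probabilities, and gamma/lognormal sojourn distributions are illustrative assumptions, not the authors' model.

import random

def sample_sojourn(state, rng):
    """Draw a non-exponential sojourn time (in days) for the given state."""
    if state == "exposed":
        return rng.gammavariate(3.0, 1.5)       # hypothetical incubation period
    if state == "mild":
        return rng.lognormvariate(2.0, 0.4)     # hypothetical time until recovery
    if state == "severe":
        return rng.lognormvariate(2.5, 0.5)     # hypothetical time until resolution
    raise ValueError(state)

# Hypothetical branching structure: state -> list of (next state, probability).
TRANSITIONS = {
    "exposed": [("mild", 0.8), ("severe", 0.2)],
    "mild":    [("recovered", 1.0)],
    "severe":  [("recovered", 0.9), ("deceased", 0.1)],
}

def simulate_individual(seed=None):
    """Return the (state, entry time) path of one simulated individual."""
    rng = random.Random(seed)
    t, state, path = 0.0, "exposed", []
    while state in TRANSITIONS:
        path.append((state, t))
        t += sample_sojourn(state, rng)          # non-exponential sojourn time
        r, cum = rng.random(), 0.0
        for nxt, p in TRANSITIONS[state]:        # sample the next state
            cum += p
            if r <= cum:
                state = nxt
                break
    path.append((state, t))                      # absorbing state
    return path

if __name__ == "__main__":
    print(simulate_individual(seed=2022))

A population-level hybrid simulator of the kind described above would layer many such individual paths, coupled through infection events and vaccination, on top of this skeleton.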
Agent Based Simulatable City Digital Twin to Explore Dynamics of Covid-19 Pandemic Souvik Barat and Vinay Kulkarni (Tata Consultancy Services Ltd); Aditya Paranjape (Imperial College London); and Ritu Parchure, Shrinivas Darak, and Vinay Kulkarni (Prayas Health Group) Abstract Abstract Predicting the evolution of the Covid-19 pandemic has been a challenge as it is significantly influenced by the characteristics of people, places and localities, dominant virus strains, vaccines, and adherence to pandemic control interventions. Traditional SEIR based analyses help to arrive at a ‘lumped up’ understanding of pandemic evolution, which falls short when determining locality-specific measures for controlling the pandemic. We approach the problem space from a systems-theory perspective to develop a fine-grained city digital twin as an “in-silico” experimentation aid and exploit simulation experimentation to systematically explore: which indicators influence infection spread, and to what extent? Which intervention should be introduced, and when, to control the pandemic with a-priori assurance? How best to return to a new normal without compromising individual health and safety? This paper presents a digital twin centric simulation-based approach, illustrates it in a real-world context of an Indian city, and summarizes the learning and insights based on this experience. COVID-19 and Epidemiological Simulations Models and Case Studies of COVID-19 Impacts and Interventions Chair: Philippe J. Giabbanelli (Miami University) Towards Reusable Building Blocks To Develop COVID-19 Simulation Models Shane Schroeder, Christopher Vendome, and Philippe J. Giabbanelli (Miami University) and Alan M. Montfort (IMT Mines Ales) Abstract Abstract Modeling & Simulation has played an essential role in supporting the decision-making activities of policymakers for COVID-19. However, a proliferation of models has been noted in the literature, and new models are only more likely to emerge given the shift to long-term management of the disease and the call for highly tailored tools. Having a multiplicity of models can have benefits, for example when contributing to ensembles of models. However, if each model is created from scratch, there is significant redundancy in efforts, hence time inefficiency and a heightened risk of bugs. Our study examines the naturally occurring practices of modelers who wrote COVID-19 models in NetLogo to identify redundancy in code and thus suggest reusable `building blocks' that would speed up the process of model development as well as improve code quality. Based on 28 models, we identified five themes and discussed their transformation into potential building blocks for simulation. Modelling the Delta Covid-19 Wave in Mumbai Sandeep Juneja and Daksh Mittal (Tata Institute of Fundamental Research) Abstract Abstract Using our agent-based simulator (ABS), we attempt to explain the infectiousness of the delta variant through scenario analysis to best match the observed fatality data in Mumbai, where the variant initially spread. Our somewhat prescient conclusion, based on analysis conducted in March-April 2021, was that the new variant was 2-2.5 times more infectious than the original Wuhan variant. We also observed then that certain performance measures such as timings of peaks and troughs were quite robust to the variations in model parameters and hence can be reliably projected even in the presence of model uncertainties.
Furthermore, we introduce enhancements to help model variants, vaccinations, the basic reproduction number, and the effective reproduction number in the ABS. Our analysis suggests an interesting observation: although slums house around half of Mumbai's population, are much denser, and have higher prevalence, the effective reproduction numbers in slums and non-slums surprisingly equalize early on and largely move together thereafter. Modeling and Simulation for the Spread of Covid-19 in an Indian City: A Case Study Aditya Paranjape, Souvik Barat, and Anwesha Basu (Tata Consultancy Services Ltd); Rohan Salvi (University of Illinois at Urbana-Champaign); and Supratim Ghosh and Vinay Kulkarni (Tata Consultancy Services Ltd) Abstract Abstract We present a case study on modeling and predicting the course of Covid-19 in the Indian city of Pune. The results presented in this paper are concerned primarily with the wave of infections triggered by the Delta variant during the period between February and June 2021. Our work demonstrates the necessity for bringing together compartmental stock-and-flow and agent-based models and the limitations of each approach when used individually. Some of the work presented here was carried out in the process of advising the local city administration and reflects the challenges associated with employing these models in a real-world environment with its uncertainties and time pressures. Our experience, described in the paper, also highlights some of the risks associated with forecasting the course of an epidemic with evolving variants. COVID-19 and Epidemiological Simulations Agent-based Models for Tracking the Spread of COVID-19 Chair: Xiao Feng Yin (Institute of High Performance Computing, A*STAR Singapore) Evaluating the Covid-19 Screening Regime for Cross-border Workers Xiao Feng Yin (Institute of High Performance Computing, A*STAR Singapore); Yiqi Seow (Institute of Bioengineering and Bioimaging); Haiyan Xu, Xiuju Fu, and Zheng Qin (Institute of High Performance Computing, A*STAR Singapore); Hong Kiat Tan (Agency for Science, Technology and Research (A*STAR)); Li Yang Hsu (National University of Singapore); Kiesha Prem (London School of Hygiene & Tropical Medicine); and Sze Wee Tan (Agency for Science, Technology and Research (A*STAR)) Abstract Abstract Global travel and trade have been hit hard by the COVID-19 pandemic. Border closures have impacted both leisure and business travel. The socioeconomic costs of border closure are particularly severe for individuals living and working across state lines, for whom previously unhindered passage has been curtailed and the daily commute across borders is now virtually impossible. Here, we examine how the periodic screening of daily cross-border commuters across territories with relatively low COVID-19 incidence will impact the transmission of SARS-CoV-2 across borders using agent-based simulation. We find that periodic testing at practical frequencies of once every 7, 14 or 21 days would reduce the number of infected individuals crossing the border. The unique transmission characteristics of SARS-CoV-2 suggest that periodic testing of populations with low incidence is of limited use in reducing cross-border transmission and is not as cost-effective as other mitigation measures for preventing transmission. Assessing Transmission Risks of SARS-CoV-2 Omicron Variant in U.S.
School Facilities and Mitigation Measures Yifang Xu (University of Tennessee, Knoxville); Siyao Zhu (Johns Hopkins University); Shuai Li (University of Tennessee, Knoxville); Jiannan Cai (University of Texas at San Antonio); and Qiang He (University of Tennessee, Knoxville) Abstract Abstract The emergence of the SARS-CoV-2 Omicron variant raises concerns for school operations worldwide. The Omicron variant spread faster than other variants that cause COVID-19, and breakthrough infections are reported in vaccinated people. Schools are hotbeds for the transmission of the highly contagious virus. Therefore, it is crucial to understand the risks of Omicron transmission and the effectiveness of different measures to prevent the surge of infection cases. This study estimates the risks of airborne transmission and fomite transmission of the Omicron variant using simulations and data from 11,485 public and private schools in the U.S. It also analyzes the impact of different mitigation measures on limiting airborne transmission and fomite transmission risks in schools. It was found that the Omicron variant caused relatively high infection risks in schools. The risk of airborne transmission is nine times higher than that of fomite transmission. Effective mitigation measures can significantly decrease the transmission risk. Effect of Vaccination on Risk of Exposure to Airborne Infectious Disease During the Boarding Process in a Commercial Aircraft using Agent-Based Simulation Bruna Helena Pedroso Fabrin, Denise Beatriz Ferrari, José Danieel Leite, Amanda Zíngara Roza, and Bren Dabela Luna (Aeronautics Institute of Technology) Abstract Abstract As increasing proportions of the world’s population have received at least one dose of the vaccine against COVID-19, everyday activities start to be resumed, including travel. The present study investigates the impact of immunization on the risk of exposure to an infectious disease, such as COVID-19, during the boarding process in a commercial airplane. An agent-based simulation model considers different vaccine types and vaccination rates among passengers. The results show a significant decrease in the median exposure risk when the vaccination rate increases from 0% to 100%, but also that people in seats adjacent to an infectious passenger are at much higher risk for a similar vaccination coverage. Such results provide quantitative evidence of the importance of mass immunization, and also that, when full vaccination is not guaranteed for 100% of passengers, it may be advisable to avoid full occupancy of the aircraft by implementing physical distancing when assigning seats. Track Coordinator - Data Science and Simulation: Abdolreza Abhari (Ryerson University), Cheng-bang Chen (University of Miami), Mani Sharifi (Ryerson University) Data Science and Simulation Artificial Intelligence/Machine Learning in DSS I Chair: Abdolreza Abhari High-Resolution Shape Deformation Prediction in Additive Manufacturing using 3D CNN Best Contributed Applied Paper - Finalist Benjamin Standfield, Rongxuan Wang, Denis Gracanin, and Zhenyu Kong (Virginia Tech) Abstract Abstract Additive manufacturing (AM) processes usually have lower geometric quality and are less reliable when compared to subtractive manufacturing processes. However, AM processes are seeing more use in industry because they are both affordable and flexible.
To address the lower geometric quality and reduced reliability drawbacks, 3D Convolutional Neural Networks (CNNs) were developed and used to predict the deformations from the ideal sliced 3D object. The developed 3D CNNs were tested on a live dataset consisting of 50 3D printed, 3D scanned, and aligned objects. The linear spatial resolution of these predictions is improved to 100$\mu$m with a sampling frequency of 166 units per inch, compared to a standard peak resolution of 64 units across an axis. The results indicate that using the described approach provides better predictors of part geometry than the original STL file defining the part. The average increase in F1 measure is 0.0644 over using the STL. A New Application of Machine Learning: Detecting Errors in Network Simulations Maciej Wozniak (KTH Royal Institute of Technology) and Luke Liang, Hieu Phan, and Philippe J. Giabbanelli (Miami University) Abstract Abstract After designing a simulation and running it locally on a small network instance, the implementation can be scaled up via parallel and distributed computing (e.g., a cluster) to cope with massive networks. However, implementation changes can create errors (e.g., parallelism errors), which are difficult to identify since the aggregate behavior of an incorrect implementation of a stochastic network simulation can fall within the distributions expected from correct implementations. In this paper, we propose the first approach that applies machine learning to traces of network simulations to detect errors. Our technique transforms simulation traces into images by reordering the network's adjacency matrix and then trains supervised machine learning models on these images. Our evaluation on three simulation models shows that we can easily detect previously encountered types of errors and even confidently detect new errors. This work opens up numerous opportunities by examining other simulation models, representations (i.e., matrix reordering algorithms), or machine learning techniques. Using Deep Learning for Simulation of Real Time Video Streaming Applications Abdolreza Abhari and Dipak Pudasaini (Ryerson University) Abstract Abstract The traditional approaches for simulation of video analytics applications suffer from the lack of real data generated by the employed machine learning techniques. Machine learning methods need huge amounts of data, which causes network congestion and high latency in cloud-based networks. This paper proposes a novel method for performance measurement and simulation of video analytics applications to evaluate the solutions addressing the cloud congestion problem. The proposed simulation is achieved by building a model prototype called Video Analytic Data Reduction Model (VADRM) that divides video analytic jobs into smaller tasks with fewer processing requirements to run on edge networking. Real data generated from the VADRM prototype are characterized and fitted via curve fitting to find distribution models for generating larger amounts of artificial data for resource management simulation. Distribution models based on real data from the CNN-based VADRM prototype are used to build a queueing model and a comprehensive simulation of real-time video analytics applications.
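As a rough illustration of the curve-fitting step described above, the sketch below fits a parametric distribution to observed per-task processing times and then draws a larger synthetic trace for use as simulation input; the gamma family and the stand-in data are assumptions for illustration only, not the VADRM implementation.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Stand-in for measured per-task processing times (seconds) from a prototype.
observed = rng.gamma(shape=2.0, scale=0.15, size=500)

# Curve fitting: estimate gamma parameters from the observations (loc fixed at 0).
shape, loc, scale = stats.gamma.fit(observed, floc=0.0)

# Generate a larger synthetic trace to feed a resource-management simulation.
synthetic = stats.gamma.rvs(shape, loc=loc, scale=scale, size=10_000, random_state=rng)

print(f"fitted shape={shape:.2f}, scale={scale:.3f}")
print(f"observed mean={observed.mean():.3f}s, synthetic mean={synthetic.mean():.3f}s")

The synthetic trace can then drive service times in a queueing or discrete-event model, which is the general pattern the abstract describes.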
Data Science and Simulation Artificial Intelligence/Machine Learning in DSS II Chair: Rong Zhou (IHPC, A*STAR) An Approach to Population Synthesis of Engineering Students for Understanding Dropout Risk Danika Dorris, Julie Ivy, and Julie Swann (North Carolina State University) Abstract Abstract Dropping out of STEM remains a critical issue today, and it would be useful for universities to have reliable predictive models to detect students’ dropout risks. Generating a synthetic population that represents the true population could be useful for simulating the system and testing scenarios. We outline an approach for creating a synthetic population of students in STEM and build a microsimulation that simulates students’ risk behaviors over time. This process has identified several areas that must be addressed before the synthetic population represents the true population in a simulation. Discrete Event Simulation Using Distributional Random Forests to Model Event Outcomes Sean Reed and Magnus Löfstrand (Örebro University) Abstract Abstract In discrete event simulation (DES), the events are random (aleatory) and typically represented by a probability distribution that fits the real phenomenon being studied. The true distributions of event outcomes, which may be multivariate, are often dependent on the values of covariates, and this relationship may be complex. Due to difficulties in representing the influence of covariates within DES models, often only the averaged distribution or expected value of the conditional distribution is used. However, this can reduce modelling accuracy and prevent the model from being used to study the influence of covariates. Distributional random forests (DRF) are a machine learning technique for predicting the multivariate conditional distribution of an outcome from the values of covariates using an ensemble of decision trees. In this paper, the benefits of utilizing DRF in DES are explored through comparison with alternative approaches in a model of a powder coating industrial process. Machine Learning Based Simulation for Fault Detection in Microgrids Joshua Darville, Temitope Runsewe, Abdurrahman Yavuz, and Nurcin Celik (University of Miami) Abstract Abstract Fault detection (FD) is crucial for a functioning microgrid (MG) but is particularly challenging since faults can stay undetected indefinitely. Hence, there is a need for real-time, accurate FD in the early phase of MG operations to mitigate small initial deviations from nominal conditions. To address this need, we propose an FD framework for MG operational planning. Our proposed framework is synthesized from i) generating a dataset by introducing faults into an MG with PV cells, ii) processing the dataset to train various machine learning (ML) models for FD, iii) benchmarking the resulting FD models using classification metrics, and iv) applying an appropriate fault mitigation strategy. Although noisy measurements were present during the experiment due to variations in ambient temperature and solar irradiance, our proposed FD model is shown to be both computationally efficient, with an average training time of 1.76 seconds, and accurate, with a weighted F-score of 0.96. Data Science and Simulation Model (Theory) in DSS Chair: Rie Gaku (St.
Andrew’s University, Momoyama Gakuin University) Exact Optimal Fixed Width Confidence Interval Estimation for the Mean Best Contributed Theoretical Paper - Finalist Vikas Deep and Achal Bassamboo (Northwestern University), Sandeep Juneja (TIFR), and Assaf Zeevi (Columbia University) Abstract Abstract We consider a classical problem in simulation/statistics: given i.i.d. samples of a random variable (rv), the goal is to arrive at a confidence interval (CI) of a pre-specified width $\epsilon$, and with a coverage guarantee that the mean lies in the CI with probability at least $1-\delta$ for a pre-specified $\delta \in (0,1)$. This problem has been well studied in an asymptotic regime as $\epsilon$ shrinks to zero. The novelty of our analysis is the derivation of the lower bound on the number of samples required by any algorithm to construct a CI of $\epsilon$-width with the coverage guarantee for fixed $\epsilon > 0$ and $\delta$, and the construction of an algorithm that, under mild assumptions, matches the lower bound. For simplicity, we present our results for rvs belonging to a single-parameter exponential family, and illustrate the algorithm's efficacy through a numerical study. Feature-Modified SEIR Model for Pandemic Simulation and Evaluation of Intervention Approaches Best Contributed Theoretical Paper - Finalist Yingze Hou and Hoda Bidkhori (University of Pittsburgh) Abstract Abstract The SEIR (susceptible-exposed-infected-recovered) model has been widely used to study infectious disease dynamics. For instance, there have been many applications of SEIR analyzing the spread of COVID-19 to provide suggestions on pandemic/epidemic interventions. Nonetheless, existing models simplify the population, disregarding different demographic features and activities related to the spread of the disease. This paper provides a comprehensive SEIR model to enhance the prediction quality and effectiveness of intervention strategies. The new SEIR model estimates the exposed population via a new approach involving health conditions (sensitivity to disease) and social activity level (contact rate). To validate our model, we compare the infection cases estimated by our model with actual confirmed cases from the CDC and with estimates from the classic SEIR model. We also consider various protocols and strategies, utilizing our modified SEIR model in many simulations to evaluate their effectiveness. Data-driven Economic Analysis of Poultry Data Used in Complex Long-term Egg Production Systems Combining Simulation and Machine Learning Rie Gaku (Momoyama Gakuin University), Soemon Takakuwa (Nagoya University), Louis Luangkesorn (Highmark Health), and Hiroshi Saito (Ainan Sangyo Ltd.) Abstract Abstract A hybrid modeling approach was proposed and developed as a tool for the economic analysis of poultry breeds used in complex long-term egg production systems. The factors considered included both the stored and collected internal operational data related to within-breed historical life-cycle reliability and the related economic data that influenced egg prices and poultry life spans on a poultry farm. In the designed simulation models, egg sales prices forecasted by a machine learning algorithm were incorporated to support the economic analysis of specific poultry breeds. Our analysis results demonstrated that simulations could be combined with machine learning to serve as a powerful large-scale data analysis tool for the poultry breeds used in complex long-term egg production systems.
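The general pattern of feeding a learned price forecast into a life-cycle simulation can be sketched as follows; the linear trend "forecaster", prices, laying rates, and horizon are invented purely for illustration and are not the authors' model or data.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical weekly historical egg prices (currency units per egg).
weeks = np.arange(104)
prices = 0.10 + 0.0003 * weeks + rng.normal(0.0, 0.005, size=weeks.size)

# Stand-in for the machine learning forecaster: a least-squares linear trend.
slope, intercept = np.polyfit(weeks, prices, deg=1)
forecast = lambda w: intercept + slope * w

# Toy life-cycle simulation: weekly egg output per hen declines over the cycle,
# and revenue is accumulated at the forecasted price.
def simulate_flock_revenue(n_hens=1000, horizon_weeks=60):
    revenue = 0.0
    for w in range(horizon_weeks):
        eggs_per_hen = max(0.0, 6.0 - 0.05 * w) + rng.normal(0.0, 0.2)
        revenue += n_hens * eggs_per_hen * forecast(weeks.size + w)
    return revenue

print(f"simulated revenue over 60 weeks: {simulate_flock_revenue():,.0f}")

In a full study the forecaster would be a trained model and the simulation would capture breed-specific reliability and costs, but the coupling between the two components follows this shape.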
Data Science and Simulation Service Operations Management in DSS Chair: Philippe J. Giabbanelli (Miami University) A Data-Driven Discrete Event Simulation Model to Improve Emergency Department Logistics Mohamed Nezar Abourraja, Luca Marzano, Jayanth Raghothama, and Arsineh Boodaghian Asl (KTH Royal Institute of Technology); Sven Lethvall and Nina Falk (Uppsala University Hospital); and Adam S. Darwich and Sebastiaan Meijer (KTH Royal Institute of Technology) Abstract Abstract Demand for health care is becoming overwhelming for healthcare systems around the world in terms of resource availability, particularly in emergency departments (EDs), which are continuously open and must immediately serve any patient who comes in. Efficient management of EDs and their resources is required more than ever. This could be achieved either by optimizing resource utilization or by improving the hospital layout. This paper investigates, through data-driven simulation, alternative workflow and layout designs for operating the ED of Uppsala University Hospital in Sweden. Results are analyzed to understand the requirements across the hospital for reduced waiting times in the ED. The main observation is that introducing a new ward, with a capacity of fewer than 20 beds, dedicated to patients with complex diagnoses leads to lower waiting times. Furthermore, the use of data mining was of great help in reducing the effort of building the simulation model. A Self-Adaptive Search Space Reduction Approach for Offshore Wind Farm Installation using Multi-Installation Vessels Shengrui Peng and Helena Szczerbicka (L3S Research Center) Abstract Abstract As an important part of renewable energy resources, offshore wind energy has great potential compared to its onshore counterpart, despite the vast developments in recent decades. However, due to more complex environmental conditions and physical restrictions, the installation of an offshore wind farm is hard to plan and predict, which often results in delays. This paper focuses on the scheduling problem in the installation phase of an offshore wind farm. We propose an adaptive search strategy based on the Apriori property and information entropy. The purpose is to prune the search space in an effective and intelligent way in order to realize agile and swift rescheduling in response to environmental changes. For the numerical experiments, we use environmental data obtained from the German North Sea from 1958 to 2007 at hourly resolution. Call Center Agent Scheduling Evaluation Using Discrete-Event Simulation: A Decision-Support Tool Samer Alsamadi, Clea Martinez, Nicolas Cellier, Franck Fontanili, and Canan Pehlivan (IMT Mines Albi) Abstract Abstract Call center agent scheduling is the process of assigning agents to their respective shifts throughout a day in which information regarding the volume and arrival profile of calls is unknown. The construction of such a schedule will have a direct impact on the quality of service and the finances of the call center. Of great importance is knowing when and how to assess this agent schedule despite the uncertainties of how the day will actually unfold. The answers to these questions and their contribution to maintaining a certain level of performance are explored. Through discrete-event simulation, we were able to simulate different agent schedules of a call center for the disabled community, anonymized under the abbreviation ANGUS.
Our results indicate the capability of evaluating schedules based on the simulation’s predicted outcomes. With such insight, it is indeed possible to meet the performance objectives developed by ANGUS. Data Science and Simulation Simulation-based Analytics in DSS Chair: Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, University of Southern Denmark) Maritime Disruption Impact Evaluation Using Simulation and Big Data Analytics Rong Zhou, Haiyan Xu, Xiuju Fu, Xiao Feng Yin, Zheng Qin, and Liangbin Zhao (Agency for Science, Technology and Research); Pramod Verma (PSA International); and Mikael Lind (Research Institutes of Sweden (RISE)) Abstract Abstract Disruptions in maritime networks may cause significant financial burden and damage to businesses. Recently, some international ports have been experiencing unprecedented congestion due to the COVID-19 pandemic and other disruptions. It is paramount for the maritime industry to further enhance the capability to assess and predict impacts of disruptions. With more data available from industrial digitization and more advanced technologies developed for big data analytics and simulation, it is possible to build up such capability. In this study, we developed a discrete event simulation model backed by big data analytics for realistic and valid inputs to assess the impacts of the Suez Canal blockage on the Port of Singapore. The simulation results reveal an interesting finding: the blockage that occurred in the Suez Canal could hardly cause significant congestion in the Port of Singapore. The work can be extended to evaluate impacts of other types of disruptions, even ones occurring concurrently. Design and Implementation of Human-behave Bot for Realistic Web Browsing Activity Generation Akhil Vuputuri, Aris Cahyadi Risdianto, and Ee-Chien Chang (National University of Singapore) Abstract Abstract Cybersecurity experiments/exercises in a testbed require benign traffic to ensure a more effective and rigorous experiment/exercise process. This traffic is usually generated by humans to camouflage the attack traffic by producing different activities/tasks in the testbed. Some agents can generate this traffic, but unfortunately, most simple agents are predictable, and their actions are easily distinguished from human behavior. We propose distributional models that best replicate specific human actions by analyzing human activity datasets. These distributional models can be employed in our orchestration system for a human agent (i.e., bot) that allows realistic human activity generation and is easy to deploy and utilize. This paper discusses our modeling of web browsing activity data from real users into well-known distributions with parameters to be fed into our bot. Our verification and measurement results show that our bot can produce realistic web browsing activities similar to real human behavior. Environmental and Sustainability Applications Buildings and Cities Chair: Albert Thomas (Indian Institute of Technology Bombay) A System Dynamics Simulation-based Sustainability Benchmarking Ann Francis and Albert Thomas (Indian Institute of Technology Bombay) Abstract Abstract Sustainability assessment is a multi-faceted, dynamic, and complex paradigm in the context of buildings with several social, economic and environmental interactions. Hence, the building sector lacks a robust sustainability evaluation and benchmarking mechanism.
Consequently, a systems-thinking approach can address these challenges due to its ability to evaluate complex systems. Therefore, this paper presents a methodological framework for sustainability benchmarking of buildings using the system dynamics modeling and simulation technique. The proposed methodology captures the dynamic trade-offs between the sustainability dimensions associated with a building while evaluating various policy scenarios for improvement. Further, through a series of system simulations, a benchmarking scale is developed to indicate the sustainability of the building as high, medium, and low levels based on the defined criteria. Results indicate the sustainability behavior of different buildings in comparison to achievable industry benchmarks for the building type evaluated, while recommending measures for improvement to achieve better sustainability. Optimal Control of MAU Coils in an Existing Building Jin-Hong Kim, Young-Sub Kim, and Hyeong-Gon Jo (Seoul National University, Department of Architecture and Architectural Engineering College of Engineering); Eiji Urabe, Jeonghun Gwak, and Yongsung Park (SamSung C&T Corp, High-Tech ENG Team); and Cheol-Soo Park (Seoul National University, Department of Architecture and Architectural Engineering College of Engineering) Abstract Abstract This paper reports an optimal cooling control strategy for a makeup air unit (MAU) in an existing building. The authors developed a physics-based simulation model that can predict the supply air temperature and humidity leaving two cooling coils as well as the cooling energy consumption of a chiller. The control variables in this study are the valve opening ratios of the two coils. With the use of the simulation model, the authors could suggest energy savings of 9.1%. Smart City Digital Twins for Public Safety: A Deep Learning and Simulation Based Method for Dynamic Sensing and Decision-making Xiyu Pan, Neda Mohammadi, and John Taylor (Georgia Institute of Technology) Abstract Abstract Technological innovations are expanding rapidly in the public safety sector, providing opportunities for more targeted and comprehensive urban crime deterrence and detection. Yet, the spatial dispersion of crimes may vary over time. Therefore, it is unclear whether and how sensors can optimally impact crime rates. We developed a Smart City Digital Twin based method to dynamically place license plate reader (LPR) sensors and improve their detection and deterrence performance. Utilizing continuously updated crime record data, the convolutional long short-term memory model we developed predicted the areas where crimes were most likely to occur. Then, a Monte Carlo traffic simulation simulated suspect vehicle movements to determine the most likely routes to flee crime scenes. Dynamic LPR placement predictions were made weekly, capturing the spatiotemporal variation in crimes and enhancing LPR performance relative to static placement. We tested the proposed method in Warner Robins, GA, and results support the method’s promise in detecting and deterring crime. Environmental and Sustainability Applications Agriculture and Farming Chair: Albert Thomas (Indian Institute of Technology Bombay) Site Choice in Recreational Fisheries – Towards an Agent-Based Approach Kevin Haase (Thünen Institute for Baltic Sea Fisheries), Oliver Reinhardt (University of Rostock), Wolf-Christian Lewin and Harry V.
Strehlow (Thünen Institute for Baltic Sea Fisheries), and Adelinde Uhrmacher (University of Rostock) Abstract Abstract The site choice decisions of recreational fishers (anglers) have important implications for fish stocks, fisheries management, and coastal economies by influencing catch rates and determining where economic values are created. However, the mechanisms of site choice are poorly understood. In this paper, we make the first steps toward applying agent-based modeling to improve the understanding of site choice decisions. Using an exploratory approach, we identified travel distance, dependent on angler origin, as a key element in site choice for rebuilding travel patterns and distances. The 5-year average catches played a subdominant role in the travel patterns but could realistically recreate the anglers’ distribution among the fishing locations. Utility functions combined both factors, but further model development and more realistic angler agents and catch rates are required to understand anglers’ site choices in more detail. A Simulation Model For Cooperative Robotics In Dairy Farms Berry Gerrits, Martijn Mes, and Peter Schuur (University of Twente) and Robert Andringa (Distribute) Abstract Abstract Clean floors in dairy farms are of vital importance to mitigate risks regarding cow welfare and to avoid ammonia emissions. In modern dairy farms, it is common to deploy manure cleaning robots to automate the cleaning task. In the current state of the art, robots operate in relative solitude, with no coordination or cooperation within the fleet. This paper employs discrete-event simulation to test the effectiveness of different strategies for cooperative, team-based cleaning in dairy farms. Special attention is paid to the impact of various team compositions and robot characteristics on team routing. The results look promising: the minimum cleanliness can be increased by deploying teams while simultaneously reducing the number of cow-robot interactions. Environmental and Sustainability Applications Construction and Infrastructure Chair: Neda Mohammadi (Georgia Institute of Technology) Using Simulation-Based Forecasting to Project Singapore’s Future Residential Construction Demand and Impacts on Sustainability Elyar Pourrahimian, Malak Al Hattab, Salam Khalife, Mohamed ElMenshawy, and Simaan AbouRizk (University of Alberta) Abstract Abstract Singapore’s 2030 Green Plan aims to advance the nation’s sustainable development agenda in alignment with rising global sustainability concerns. Accordingly, construction research is shifting its focus towards the sustainability impacts of the sector’s practices. Residential construction, specifically, constitutes the majority of the sector’s operations, energy use, and emissions while also having socio-economic impacts on all involved stakeholders. Therefore, this paper investigates demand trends and sustainable performance of the residential construction industry as an essential step towards achieving Singapore’s sustainable development goals. As such, this research combines system dynamics modeling and forecasting techniques to (1) forecast the future demand of Singapore’s residential sector by modeling the relationships between various influencing factors, and (2) predict the environmental and socio-economic impacts associated with the forecasted increase in demand.
The research’s value lies in harnessing the power of simulation-based forecasting to aid policy-makers in making informed, evidence-based decisions regarding the industry’s sustainable future. Modular And Extensible Pipelines For Residential Energy Demand Modeling And Simulation Swapna Thorve (University of Virginia) and Anil Vullikanti, Samarth Swarup, Henning Mortveit, and Madhav Marathe (University of Virginia, Biocomplexity Institute) Abstract Abstract The landscape of residential energy modeling is changing rapidly. With the increase in the availability of data, ‘Modeling & Simulation’ systems are becoming ubiquitous. However, reusing or extending these simulations is complicated due to sparse commonality in design and interoperability. One solution to this conundrum is developing modular and extensible pipelines. In this paper, we define a set of five pipelines inspired by microservices-oriented architecture. Four modular pipeline templates are defined: the Data Processing Pipeline, the Modeling and Simulation Pipeline, the Validation Pipeline, and the Visual Analytics Pipeline, each encapsulating details of important tasks in modern-day complex systems. In addition, one custom pipeline, the Parallelizable Pipeline, is developed for composing tasks that can be executed concurrently. We instantiate this pipeline architecture for designing a synthetic energy demand modeling system. The value of pipelines is demonstrated via three case studies; two of these studies provide new insights into issues related to equity and climate change impact. Modeling and Simulation to Improve Real Electric Vehicles Charging Processes by Integration of Renewable Energies and Buffer Storage Konstantin Sing (University of Erlangen-Nürnberg), Pierre Mertiny (University of Alberta), and Marco Pruckner (University of Würzburg) Abstract Abstract The present study explores a simulation model combining system dynamics and discrete-event simulation for an electric vehicle charging system. For the representation of the charging demand, the model employs data from an actual vehicle charging facility. While connected to the electrical grid, the system is augmented by a solar photovoltaic installation and stationary battery energy storage. Multiple simulation runs were performed to analyze the considered energy system over a 1-year period and compare relevant output parameters for different system configurations and system locations. Results show that a solar photovoltaic installation can be effectively integrated. For the degree of self-sufficiency, high values of 87% can be achieved with combined solar photovoltaic and battery energy storage systems. Environmental and Sustainability Applications, Simulation Down Under Simulation Down Under Chair: David Post (CSIRO) Modelling to Support Climate Adaptation in the Murray-Darling Basin, Australia David Post and David Robertson (CSIRO), Rebecca Lester (Deakin University), and Francis Chiew and Jorge Pena-Arancibia (CSIRO) Abstract Abstract Many models exist to assess the hydrological impacts of climate change. Some models even exist to assess the hydrological impacts of climate adaptation options. There are, however, far fewer models designed to assess the impacts of climate adaptation options on socio-economics, the community and the environment more widely.
A current program of work known as MD-WERP, the Murray-Darling Water and Environment Research Program, seeks to improve the understanding and representation of key processes in models used to underpin Basin analysis and planning. We are working with policy makers and water managers in Australian State and Federal governments to assess the impacts of climate change and climate adaptation options on hydrological, ecological and socio-economic outcomes in the Murray-Darling Basin (MDB). This will allow the Murray-Darling Basin Authority to consider a wide range of adaptation options in the review of the Murray-Darling Basin Plan scheduled for 2026. Model-Data Integration: Working Together and Systematically Resolving Discrepancies Matthew Adams (Queensland University of Technology), Felix Egger and Katherine O'Brien (The University of Queensland), Maria Vilas (Queensland Government), Hayley Langsdorf (Thoughts Drawn Out), Paul Maxwell (EcoFutures and Alluvium Consulting), Andrew O'Neill (Healthy Land and Water), Holger Maier (The University of Adelaide), Jonathan Ferrer-Mestres (Commonwealth Scientific and Industrial Research Organisation), Lachlan Stewart (Queensland Government), and Barbara Robson (Australian Institute of Marine Science) Abstract Abstract Models and data have an important but sometimes uneasy relationship. Data provide snapshots of what is happening in the system, whilst models can explore the underlying processes driving the observed behavior. Thus, models and data together provide a more comprehensive description of the system than either one alone. However, engagement between modellers and those who collect data can sometimes be challenging, especially if there are discrepancies between models and the data. This presentation showcases our recent work to bridge this divide by introducing (1) a systematic framework for addressing model-data discrepancies, and (2) an action plan to improve relationships between those who model and those who measure. The systematic framework aims to equally balance the potential for discrepancies to arise from data and/or models. The action plan is presented as a light-hearted animation which highlights that modellers and data collectors both want the same thing: better decisions from better science. Water Forecasts For Enhanced Environmental Water Delivery Richard Laugesen (Australian Bureau of Meteorology, University of Adelaide) and Alex Cornish and Adam Smith (Australian Bureau of Meteorology) Abstract Abstract The Australian Bureau of Meteorology delivers a suite of operational forecast services for water-dependent decision makers; one customer group uses forecasts to inform environmental water delivery. Significant environmental sites in the Murray-Darling Basin, Australia’s food bowl, have suffered due to allocation of water to irrigated agriculture. The Enhanced Environmental Water Delivery project is a multi-agency collaboration that aims to coordinate releases of environmental water from storages with natural flow events, thereby achieving ecological outcomes with less environmental water. Forecasts of streamflow and runoff will be critical to maximise outcomes and minimise unintended impacts, such as inundation above agreed limits. Water forecasting in Australia is challenging due to climatic diversity, highly variable rainfall, and ephemeral streams. Overcoming these challenges is important to provide skilful and reliable probabilistic forecasts at a range of temporal and spatial scales.
These forecasts will contribute to a more equitable distribution of water resources. Track Coordinator - Financial Engineering: Ben Feng (University of Waterloo), Guangwu Liu (City University of Hong Kong) Financial Engineering Importance Sampling in Financial Engineering Chair: Kun Zhang (City University of Hong Kong) Importance Sampling for CoVaR Estimation Guangxin Jiang (Harbin Institute of Technology) and Xin Yun (Shanghai University) Abstract Abstract Measuring systemic risk has been an important problem in financial risk management. CoVaR, one of the commonly used systemic risk measures, can capture the tail dependency of losses between financial institutions and the financial system. CoVaR is typically estimated via statistical methods such as quantile regression. In this paper, considering the complexity of the constituent securities in financial institutions and financial systems, we propose a simulation approach to estimate the CoVaR. We investigate the use of importance sampling to reduce the variance of the CoVaR estimator, and propose an efficient importance sampling distribution based on large deviation principles. We also illustrate the effectiveness of our approach via numerical experiments. Combining Retrospective Approximation with Importance Sampling for Optimising Conditional Value at Risk Anand Deo, Karthyek Murthy, and Tirtho Sarker (Singapore University of Technology and Design) Abstract Abstract This paper investigates the use of the retrospective approximation solution paradigm in solving risk-averse optimization problems effectively via importance sampling (IS). While IS serves as a prominent means for tackling the large sample requirements in estimating tail risk measures such as Conditional Value at Risk (CVaR), its use in optimization problems driven by CVaR is complicated by the need to tailor the IS change of measure differently to different optimization iterates and the circularity which arises as a consequence. The proposed algorithm overcomes these challenges by employing a univariate IS transformation offering uniform variance reduction in a retrospective approximation procedure well-suited for tuning the IS parameter choice. The resulting simulation-based approximation scheme enjoys both the computational efficiency bestowed by retrospective approximation and the logarithmically efficient variance reduction offered by importance sampling. Portfolio Risk Measurement via Stochastic Mesh with Average Weight Ben Feng (University of Waterloo), Guangwu Liu (City University of Hong Kong), and Kun Zhang (Renmin University of China) Abstract Abstract Nested simulation has been widely used in the risk measurement of derivative portfolios. The convergence rate of the mean squared error (MSE) of the standard nested simulation is $k^{-2/3}$, where $k$ is the simulation budget. To speed up the convergence, we propose a stochastic mesh approach with average weight for portfolio risk measurement under the nested setting. We establish the asymptotic properties of the stochastic mesh estimator for portfolio risk, including the bias, the variance, and the MSE. In particular, we show that the MSE converges to zero at a rate of $k^{-1}$, which is the same as that under the non-nested setting. The proposed method also allows for path dependence of financial instruments in the portfolio. Numerical experiments show that the proposed method performs well.
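For context, the sketch below shows the standard two-level nested estimator that work in this area seeks to improve upon; the one-asset toy "portfolio", the budget split, and all parameters are illustrative assumptions, not the stochastic mesh method itself.

import numpy as np

rng = np.random.default_rng(0)
n_outer, n_inner = 1000, 100          # splits a simulation budget k = n_outer * n_inner

# Outer level: simulate the risk factor at the risk horizon (the scenarios).
scenarios = rng.normal(0.0, 0.2, size=n_outer)

# Inner level: estimate the portfolio loss in each scenario by averaging
# inner samples of a toy payoff conditional on that scenario.
losses = np.empty(n_outer)
for i, s in enumerate(scenarios):
    inner = np.maximum(0.0, 1.0 - np.exp(s + rng.normal(0.0, 0.1, size=n_inner)))
    losses[i] = inner.mean()

# Tail risk summaries computed from the scenario-level loss estimates.
alpha = 0.95
var_hat = np.quantile(losses, alpha)
es_hat = losses[losses >= var_hat].mean()
print(f"estimated {alpha:.0%} VaR = {var_hat:.4f}, expected shortfall = {es_hat:.4f}")

Methods such as stochastic mesh weighting or sequential budget allocation aim to reuse or concentrate the inner samples so that the same total budget yields a faster-converging estimate than this plain two-level scheme.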
Financial Engineering Modeling and Estimating Financial and Actuarial Risks Chair: Ben Feng (University of Waterloo) Metamodeling for Variable Annuity Valuation: 10 Years Beyond Kriging Guojun Gan (University of Connecticut) Abstract Abstract Variable annuities are retirement insurance products, created by insurance companies, that contain financial guarantees. To mitigate the financial risks associated with these guarantees, insurance companies have adopted dynamic hedging, which is a risk management technique. However, dynamic hedging is associated with computationally intensive valuations of variable annuity policies. Recently, metamodeling approaches have been developed to address the computational problems. A typical metamodeling approach consists of two components: an experimental design method and a metamodel. In this paper, we give a survey of metamodeling approaches developed in the past ten years. For each metamodeling approach, we will describe the experimental design method and the metamodel. Sequential Nested Simulation for Estimating Expected Shortfall Ou Jessica Dang (Nanyang Technological University) and Ben Feng (University of Waterloo) Abstract Abstract Expected shortfall is a tail risk measure widely used in the financial industry. Estimating the expected shortfall of financial contracts often requires a nested Monte Carlo simulation, which is computationally burdensome. In a nested simulation, one first simulates a set of plausible evolutions of the underlying risk factors, i.e., the scenarios. Then inner simulations are run to evaluate the financial positions in each of the outer scenarios. In this work, we propose a nested simulation procedure that sequentially allocates a fixed computation budget to accurately estimate the expected shortfall of a financial asset. The goal is to concentrate computation effort on the tail scenarios, which are the most relevant scenarios in expected shortfall estimation. Numerical experiments show that our proposal significantly improves the accuracy of the estimation when compared to a standard nested simulation with the same computation budget. Quantile Sensitivity Estimation through Delta Family Method Zhenyu Cui (Stevens Institute of Technology) and Kailin Ding (Chinese Academy of Sciences) Abstract Abstract In this paper, we consider the estimation of the quantile sensitivity through Monte Carlo simulations. We first propose a new representation of the quantile by writing it as an expectation involving Dirac Delta payoff functions. Then we consider two alternative approximations for the Dirac Delta function: one is the Delta sequence, and the other is through orthogonal series. Then we derive quantile sensitivity estimators by combining them with the conditional expectation representation derived in Hong (2009). Numerical examples demonstrate the accuracy and efficiency of the proposed method and compare it with the existing literature. Financial Engineering, Model Uncertainty and Robust Simulations, Simulation Optimization Sampling and Regression Techniques Chair: Jiaming Liang (Yale University; The Chinese University of Hong Kong, Shenzhen) A Proximal Algorithm for Sampling from Non-smooth Potentials Jiaming Liang (Yale University; The Chinese University of Hong Kong, Shenzhen) and Yongxin Chen (Georgia Institute of Technology) Abstract Abstract In this work, we examine sampling problems with non-smooth potentials and propose a novel Markov chain Monte Carlo algorithm for them.
We provide a non-asymptotic analysis of our algorithm and establish a polynomial-time complexity $\tilde {\cal O}(M^2 d {\cal M}_4^{1/2} \varepsilon^{-1})$ to achieve $\varepsilon$ error in terms of total variation distance to a log-concave target density with 4th moment ${\cal M}_4$ and $M$-Lipschitz potential, better than most existing results under the same assumptions. Our method is based on the proximal bundle method and an alternating sampling framework. The latter framework requires the so-called restricted Gaussian oracle, which can be viewed as a sampling counterpart of the proximal mapping in convex optimization. One key contribution of this work is a fast algorithm that realizes the restricted Gaussian oracle for any convex non-smooth potential with bounded Lipschitz constant. A Dynamic Credibility Model with Self-excitation and Exponential Decay Himchan Jeong (Simon Fraser University) and Bin Zou (University of Connecticut) Abstract Abstract This paper proposes a dynamic credibility model for claim counts that extends the benchmark Poisson generalized linear model (GLM) by incorporating self-excitation and exponential decay features from Hawkes processes. Under the proposed model, a recent claim has a bigger impact on the credibility premium than an outdated claim. Empirical results show that the proposed model outperforms the Poisson GLM in both in-sample goodness-of-fit and out-of-sample prediction. A Computational Study of Probabilistic Branch and Bound with Multilevel Importance Sampling Hao Huang (Yuan Ze University); Pariyakorn Maneekul, Danielle F. Morey, and Zelda B. Zabinsky (University of Washington); and Giulia Pedrielli (Arizona State University) Abstract Abstract Probabilistic branch and bound (PBnB) is a partition-based algorithm developed for level set approximation, where investigating all subregions simultaneously is a computationally costly sampling scheme. In this study, we hypothesize that focusing branching and sampling on promising subregions will improve the efficiency of the PBnB algorithm. Two variations of Original PBnB are proposed: Multilevel PBnB and Multilevel PBnB with Importance Sampling. Multilevel PBnB focuses its branching on promising subregions that are likely to be maintained or pruned, as opposed to Original PBnB, which branches more subregions. Multilevel PBnB with Importance Sampling attempts to further improve this efficiency by combining focused branching with a posterior distribution that updates iteratively. We present numerical experiments using benchmark functions to compare the performance of each PBnB variation. Grand Challenges Grand Challenges in Simulation Application Domains Chair: Oliver Rose (University of the Bundeswehr Munich) Grand Challenges in Modeling and Simulation of Complex Manufacturing Systems John Fowler (Arizona State University), Oliver Rose (Universität der Bundeswehr München), and Tae-Eog Lee (Korea Advanced Institute of Science & Technology) Abstract Abstract As a result of a 2002 Dagstuhl Seminar (Fujimoto et al., 2002), Fowler and Rose (2004) discussed grand challenges in modeling and simulation of complex manufacturing systems. In this presentation, we will review progress on the grand challenges identified in Fowler and Rose (2004) and point out some new challenges. Grand Challenges in Healthcare Modeling and Simulation: a Panel Discussion Christine S.M. Currie (University of Southampton); David Matchar (Duke-NUS Medical School; Duke University, USA); and Maria E.
Mayorga (North Carolina State University); Thomas Monks (Exeter University); and Alexander R. Rutherford (Simon Fraser University) Abstract Abstract Since the start of the COVID-19 pandemic, the importance of healthcare to all aspects of society has become much more obvious. Simulation modeling, particularly of infectious diseases, has also become more accepted by the general population and decision makers. The challenges within healthcare systems are more pronounced, not simply due to COVID-19 but also due to other societal factors such as the ageing populations in Europe, North America and parts of Asia. Our aim is to take a global view, and we will run a panel discussion at the conference drawing on expertise from across the world and from different areas of healthcare to identify the grand challenges that researchers working in healthcare simulation should address over the next ten years. Grand Challenges in Modelling and Simulation of Logistics Systems Pankaj Sharma, Haobin Li, and Ek Peng Chew (National University of Singapore) Abstract Abstract As a result of a 2002 Dagstuhl Seminar (Fujimoto et al., 2002), Fowler and Rose (2004) discussed grand challenges in the modelling and simulation of complex manufacturing systems. In this presentation, we discuss the applicability of the grand challenges identified in Fowler and Rose (2004) to logistics systems and review their progress. In addition, we bring out some new challenges that impede the application of modelling and simulation in logistics systems. We will also discuss the possible measures and developments that will help counter these challenges. Grand Challenges Grand Challenges in Simulation Modeling Methodology Chair: Stewart Robinson (Newcastle University) Grand Challenges in High Performance Simulation Simon J. E. Taylor (Brunel University London) Abstract Abstract Grand Challenges in Modelling & Simulation have been discussed several times over the past twenty or so years. This presentation focuses on progress made towards the Grand Challenge that was originally identified at the 2002 Dagstuhl Seminar (Fujimoto et al 2002), the Development of Real-Time Simulation-Based Problem-Solving Capability or, more generally, the Grand Challenge of High Performance Simulation. Progress towards this will be reviewed, and a contemporary view of how this might be realized, reflecting twenty years of progress in advanced computer architectures, will be discussed. Grand Challenges in Hybrid Modelling Navonil Mustafee (University of Exeter), Andreas Tolk (The MITRE Corporation), Steffen Straßburger (Ilmenau University of Technology), Varun Ramamohan (Indian Institute of Technology Delhi), and Alison Harper (University of Exeter Medical School) Abstract Abstract The combined application of simulation techniques such as discrete-event simulation, system dynamics and agent-based simulation is referred to as Hybrid Simulation (HS). Distinct from HS, Hybrid Modelling (HM) is the use of simulation in conjunction with theories and frameworks, methodologies, tools and techniques from Social Sciences, Engineering, Computer Science and other disciplines. Like M&S, these approaches were developed, tested, applied and refined and continue to evolve within disciplinary confines, and there exists an opportunity for the realization of synergy through the combined application of techniques. There are several examples of HM studies using simulations with analytical models or Soft OR approaches; however, such cross-disciplinary inquiry lacks saturation.
Through the Grand Challenges panel, we seek to generate interest in HM research and practice, which embraces the most appropriate approaches irrespective of their disciplinary origins and paves the way for increased collaboration among scientific disciplines. Track Coordinator - Healthcare Applications: Bjorn Berg (University of Minnesota), Christine Currie (University of Southampton), Masoud Fakhimi (University of Surrey) Healthcare Applications Operational Planning for Critical Patients Chair: Vishnunarayan Girishan Prabhu (Clemson University) Simulation Model of a Multi-Hospital Critical Care Network Alexander Rutherford, Samantha Zimmerman, Mina Moeini, and Rashid Barket (Simon Fraser University); Steve Ahkioon (Vancouver General Hospital); and Donald Griesdale (University of British Columbia) Abstract Abstract We develop a discrete event simulation model for a network of eight major intensive care units (ICUs) in British Columbia, Canada. The model also contains high acuity units (HAUs) that provide critical care to patients who cannot be cared for in a general medical ward, but do not require the full spectrum of care available in an ICU. We model patient flow within the ICU and HAU for each of the hospitals, as well as patient transfers to address ICU capacity. Included in the model is early discharge from ICU to HAU, sometimes called "bumping", when the ICU is full, as well as ICU overflow beds. The simulation model, which is calibrated using the British Columbia Critical Care Database, will be used to support planning for critical care capacity under endemic and seasonal COVID-19. How Does Imaging Impact Patient Flow in Emergency Departments? Vishnunarayan Girishan Prabhu (University of North Carolina at Charlotte); Kevin Taaffe and Marisa Shehan (Clemson University); and Ronald Pirrallo, William Jackson, Michael Ramsay, and Jessica Hobbs (Prisma Health-Upstate) Abstract Abstract Emergency Department (ED) overcrowding continues to be a public health issue as well as a patient safety issue. The underlying factors leading to ED crowding are numerous, varied, and complex. Although a lack of in-hospital beds is frequently cited as the primary reason for crowding, EDs' dependencies on other ancillary resources, including imaging, consults, and labs, also contribute to crowding. Using retrospective data associated with imaging, including delays, processing time, and the number of image orders, from a large tier 1 trauma center, we developed a discrete event simulation model to identify the impact of imaging delays and bundling image orders on patient time in the ED. Results from sensitivity analysis show that reducing the delays associated with imaging and bundling as few as 10% of imaging orders for certain patients can significantly (p-value < 0.05) reduce the time a patient spends in the ED. Healthcare Applications Discrete-Event Simulation Modeling to Address Operational Questions in Healthcare Chair: Fumiya Abe-Nornes (University of Michigan) BloodChainSim – a Simulation Environment to Evaluate Digital Innovations in Blood Supply Chains Dennis Horstkemper (European Research Center of Information Systems, University of Münster) and Melanie Reuter-Oppermann (Software & Digital Business Group, TU Darmstadt) Abstract Abstract Blood Supply Chains are an essential element of healthcare systems.
Crisis situations, such as droughts, epidemics and political conflicts, place increasing innovation pressure on blood establishments to ensure a steady delivery of blood products at all times. Simulation provides the means to test digital innovations for blood supply chains regarding their operational efficiency and the expected costs. However, past simulation tools for blood supply chains usually do not encompass all actors of the supply chain or are unable to simulate dynamic decision making. We developed a hybrid agent-based and discrete event-based simulation toolkit incorporating different optimization problems to enable such decision support and applied it to two use cases in cooperation with South African blood establishments. A Reusable Discrete Event Simulation Model For Improving Orthopedic Waiting Lists Laura Boyle (Queen's University Belfast) and Mark Mackay (James Cook University) Abstract Abstract Waiting lists for orthopedic treatment in Australia are lengthy and frequently breach time-based performance targets, due to high demand and limited resources. This paper presents a discrete event simulation model of an orthopedic treatment pathway in Australia from the point of referral to surgical intervention for knee pain patients. The simulation model was used to investigate strategies for reducing the backlog of patients and average waiting time for initial orthopedic review, showing that a reduction of 34% in average waiting time can be achieved by adding three additional appointments per week. The model demonstrates that the introduction of community-based rehabilitation early in the patient journey can significantly reduce waiting lists and improve patient recovery without the need for surgical intervention. This simulation model has been designed to be reusable for other types of orthopedic conditions and for use in other health services. Using Discrete-Event Simulation to Analyze the Impact of Variation on Surgical Training Programs Fumiya Abe-Nornes, Samir Agarwala, Nathan Smith, Rachel Zhang, Amy Cohn, Angela Thelen, Rishindra Reddy, and Brian George (University of Michigan) Abstract Abstract In this paper, we use discrete-event simulation in an attempt to highlight the consequences of variability in surgical training. Under the current training model, case volume minimums are being used as a surrogate measure of a surgical trainee’s competency for a given operation. However, this assumes that 1) learning is a binary measure, 2) there is no variability in training opportunities, and 3) all trainees learn at the same speed. Our model addresses these variables by allowing the user to manipulate the distribution of continuous learning curves and arrival rates, simulating the competency outcomes of a surgical training model. The results demonstrate that when increasing the variability in learning speeds or decreasing the training opportunities, competency outcomes for common procedures such as appendectomies remain relatively unaffected. However, for rarer procedures like mediastinoscopies, these variabilities result in a greater proportion of less competent trainees, potentially endangering patient safety. Healthcare Applications Simulation Optimization-based Methodology for Healthcare Chair: Lambros Viennas (University of Surrey, Bridgnorth Aluminium Ltd.)
A Hierarchical Deep Reinforcement Learning Approach for Outpatient Primary Care Scheduling Mona Issabakhsh (Georgetown University) and Seokgi Lee (Youngstown State University) Abstract Abstract Primary care clinics suffer from high patient no-show and late cancellation rates. Admitting walk-in patients to primary care settings helps improve clinics’ utilization rates and accessibility; therefore, following an efficient walk-in patient admission policy is highly important. This research applies a learning-based outpatient management system investigating patient admission and assignment policies to improve operational efficiency in a general outpatient clinic with high no-show and cancellation rates and daily walk-in requests. Contrary to the general outpatient literature, our results show that only 30% of the walk-in requests should be admitted to minimize the wait time of already admitted patients and providers over time. Our results suggest assigning more than 50% of the available slots of a clinic session to punctual patients who have an appointment, to minimize long-run costs. The model and the results, however, are generated based on specific data and parameters, and cannot be directly generalized to other clinics. A Simulation-Optimization Framework To Improve The Organ Transplantation Offering System Ignacio Ismael Erazo Neira, David Goldsman, Pinar Keskinocak, and Joel Sokol (Georgia Institute of Technology) Abstract Abstract We propose a simulation-optimization-based methodology to improve the way that organ transplant offers are made to potential recipients. Our policy can be applied to all types of organs, is implemented starting at the local level, is flexible with respect to simultaneous offers of an organ to multiple patients, and takes into account the quality of the organs under consideration. We describe in detail our simulation-optimization procedure and how it uses data from the Organ Procurement and Transplantation Network and the Scientific Registry of Transplant Recipients to inform the decision-making process. In particular, the optimal batch size of offers is determined as a function of location and certain organ attributes. We present results using our liver and kidney models, where we show that, under our policy recommendations, more organs are utilized and the required times to allocate the organs are reduced relative to the one-at-a-time offer policy currently in place. Simulation-Optimization to Distinguish Optimal Symptom Free Waiting Period for Return-to-play Decisions in Sport-related Concussion Gian-Gabriel Garcia (Georgia Institute of Technology), Lauren L. Czerniak and Mariel S. Lavieri (University of Michigan), Spencer W. Liebel (University of Utah School of Medicine), Michael A. McCrea (Medical College of Wisconsin), Thomas W. McAllister (Indiana University School of Medicine), Paul F. Pasquina (Uniformed Services University of the Health Sciences), and Steven P. Broglio (University of Michigan) Abstract Abstract Approximately 1.6-3.8 million sport and recreation concussions occur annually. Yet, there is currently no universal agreement on when an athlete should be permitted to return to unrestricted play after being diagnosed with a concussion. Simulation-optimization provides a tractable method to optimize the length of the symptom-free waiting period (SFWP), i.e., the number of consecutive days after starting the return-to-play protocol that an athlete must be symptom-free before they are permitted to return to unrestricted play.
We develop a two-part treatment initiation/cessation simulation model consisting of (1) a Controlled Hidden Markov Model [pre-return-to-play] and (2) an Uncontrolled Markov Chain [post-return-to-play] and apply four simulation-optimization methods (Crude Monte Carlo, 2-Stage Decomposition, NSGS, KN) to optimize the SFWP. For collegiate men’s football and women’s soccer, we find an optimal SFWP of approximately 2 and 3.5 weeks, respectively. This research provides clinical decision-support for return-to-play decisions. Healthcare Applications Simulation Models Evaluating Patient Flow in Different Care Settings Chair: S. M. Niaz Arifin (University of Notre Dame) Discrete Event Simulation to Evaluate Shelter Capacity Expansion Options for LGBTQ+ Homeless Youth Yaren Bilge Kaya, Sophia Mantell, and Kayse Lee Maass (Northeastern University); Renata Konrad, Andrew C. Trapp, and Geri L. Dimas (Worcester Polytechnic Institute); and Meredith Dank (New York University) Abstract Abstract The New York City (NYC) youth shelter system provides housing, counseling, and other support services to runaway and homeless youth and young adults (RHY). These resources reduce RHY’s vulnerability to human trafficking, yet most shelters are unable to meet demand. This paper presents a Discrete Event Simulation (DES) model of a crisis-emergency and drop-in center for LGBTQ+ youth in NYC, which aims to analyze the current operations and test potential capacity expansion interventions. The model uses data from publicly available resources and interviews with service providers and key stakeholders. The simulated shelter has 66 crisis-emergency beds, offers five different support services, and serves on average 1,399 LGBTQ+ RHY per year. The capacity expansion interventions examined in this paper are adding crisis-emergency beds and psychiatric therapists. This application of DES serves as a tool to communicate with policymakers, funders, and service providers—potentially having a strong humanitarian impact. Simulation Model for Planning Dental Caries Prevention at the Regional Level Maria Hajłasz and Bożena Mielczarek (Wrocław University of Science and Technology) Abstract Abstract Dental caries is called the disease of the 21st century. The best proven way to prevent it is to implement preventive programs. In planning such programs, it is important to select a configuration of services that will arrest dental caries but also remain within the resource capabilities of the relevant regional authorities. The aim of this paper is to present a discrete event simulation model to test different scenarios of dental caries prevention addressed to children in a sample school in southwest Poland. By conducting simulation experiments, it is possible to select a scenario assuming fluoridation, sealing of first molars, and education as services that allow good results in the oral health of students, while using resources at a level that does not exceed full-time annual employment in Poland: 491.7 hours per year for a dentist and 285.9 hours per year for a nurse. A Simulation-Based Approach For Assessing The Impact Of Uncertainty On Patient Waiting Time In The Operating Room Leah Jana Rifi (IMT Mines Albi; Univ. Grenoble Alpes, Grenoble INP, GSCOP); Franck Fontanili and Clea Martinez (IMT Mines Albi); Maria Di Mascolo (Univ.
Grenoble Alpes, CNRS, Grenoble INP); and Virginie Fortineau (LAMIH, CNRS; GIE Vivalto Santé) Abstract Abstract Demand for surgical care is rising worldwide, making the organization of the operating room (OR) a topic of strong interest. During the last two decades, the number of papers on methods for OR planning and scheduling under uncertainty has increased significantly. However, most hospitals neglect this aspect, and use deterministic approaches to schedule their surgical interventions. This leads us to the following research question: “How can discrete-event simulation help assess the impact of uncertainty on patient waiting time in the OR?” To answer this question, we suggest a 3-step methodology: (1) building the deterministic model of the studied OR, (2) implementing uncertainties on activity durations, patient arrival times and patient care requirements, and (3) experimenting with different uncertainty-related scenarios and analyzing the results. We have applied this methodology to a use case inspired by our partner’s OR: Hôpital Privé de La Baie, from the Vivalto Santé French health group. Healthcare Applications Discrete-Event Simulation Models to Inform Healthcare Decisions Chair: Jung Hyup Kim (University of Missouri) Simulation And Analysis Of Disruptive Events On A Deterministic Home Health Care Routing And Scheduling Solution Guillaume Dessevre (IMT Mines Albi), Liwen Zhang (Berger-Levrault), Cléa Martinez (IMT Mines Albi), Christophe Bortolaso (Berger-Levrault), and Franck Fontanili (IMT Mines Albi) Abstract Abstract Due to the aging of populations and a desire to relieve the growing demand on healthcare facilities in recent years, the demand for home health care services has been increasing, and the Home Health Care Routing and Scheduling Problem (HHCRSP) is now among the most intensely studied optimization problems. However, most studies on the HHCRSP are based on deterministic models that do not consider any disruptions that may compromise the execution of the schedules. In this paper, we analyze the impact of different sources of disturbances on a deterministic schedule: delays at the start of the route, variability of travel time, and variability of service processing time. Simulation has been chosen because it helps to easily model and analyze complex environments with several sources of variability. Graphical representations and an analysis of variance are presented to interpret the results, leading to several managerial insights and openings for future research. Fatigue-recovery Simulation Model to Analyze the Impact of Nursing Activities on Fatigue Level in an Intensive Care Unit Vitor de Oliveira Vargas, Jung Hyup Kim, Laurel Despins, and Alireza Kasaie (University of Missouri) Abstract Abstract The main purpose of the current study is to create a simulation model capable of estimating nurses’ fatigue levels during a daily shift in an Intensive Care Unit (ICU). The model has been statistically tested and validated by comparison against time study observation data. According to the simulation results, the average fatigue level of ICU nurses was 62.7%-63.6% in a normal workload condition. When the ICU nurses experienced a high workload of regular primary care in the model, the average fatigue level increased by 5.5%-11.7% compared to the normal condition. For peer support, the fatigue level increased by 5.7%-7.3%.
The main contribution of this work is that the model could provide a new way to estimate the nurses’ fatigue levels in different workload conditions and establish specific nurse-patient ratios dynamically to improve patient care in a medical ICU. Healthcare Applications Simulation Models to Inform Healthcare Decisions I Chair: Alison Harper (University of Exeter) Could Earlier Availability of Boosters and Pediatric Vaccines Have Reduced Impact of COVID-19? Erik Rosenstrom, Julie Ivy, Maria Mayorga, and Julie Swann (North Carolina State University) Abstract Abstract The objective is to evaluate the impact of the earlier availability of COVID-19 vaccinations to children and boosters to adults in the face of the Delta and Omicron variants. We employed an agent-based stochastic network simulation model with a modified SEIR compartment model populated with demographic and census data for North Carolina. We found that earlier availability of childhood vaccines and earlier availability of adult boosters could have reduced the peak hospitalizations of the Delta wave by 10% and the Omicron wave by 42%, and could have reduced cumulative deaths by 9% by July 2022. When studied separately, we found that earlier childhood vaccinations reduce cumulative deaths by 2,611 more than earlier adult boosters. Therefore, the results of our simulation model suggest that the timing of childhood vaccination and booster efforts could have resulted in a reduced disease burden and that prioritizing childhood vaccinations would most effectively reduce disease spread. The Issue of Trust and Implementation of Results in Healthcare Modeling and Simulation Studies Alison Harper (University of Exeter Medical School) and Navonil Mustafee and Mike Yearworth (University of Exeter Business School) Abstract Abstract The issue of real-world implementation of the results of modeling and simulation (M&S) studies in healthcare has been the focus of research interest for decades. Using a model of trust which focuses on a three-way conceptualization of trust between modelers, the model and stakeholders across the M&S study process, this paper investigates reported project features of a subset of healthcare studies that describe results implementation. Differentiating between credibility and trust, the paper provides a preliminary evaluation of aspects of implemented studies that can be mapped to the trust model. The findings align with previous empirical results that have investigated implementation in healthcare M&S, and support the value of the trust model for structuring an evaluation or implementation plan for M&S studies in the healthcare domain. Workshift Scheduling Using Optimization and Process Mining Techniques: An Application in Healthcare Alberto Guastalla, Emilio Sulis, and Roberto Aringhieri (Università degli Studi di Torino) and Stefano Branchi, Chiara Di Francescomarino, and Chiara Ghidini (Fondazione Bruno Kessler) Abstract Abstract This paper aims to support healthcare organizations in automatically generating rostering plans by combining optimization and process mining approaches. Based on event logs from the information system, we propose a decision support system that simulates work schedules. Managing staff workshifts is a complicated issue to solve, especially in large and complex organizations such as those in the healthcare sector. A number of different factors must be taken into account: operational constraints, personal preferences and regulations all have to be considered in order to produce the best plan.
In our approach, we exploit the idea that the patterns included in the realised rostering plans can represent the personal needs and unspoken habits of the personnel. Based on this remark, we propose a three-step methodological framework -- rostering optimization, pattern extraction, pattern adaptation -- which we applied to a real-world scenario. Healthcare Applications Simulation Models to Inform Healthcare Decisions II Chair: Georgiy Bobashev (RTI International) Utilizing Simulation to Update Routine Diabetic Retinopathy Screening Policies Poria Dorali, Rosangel Limongi, and Fariha Kabir Torsha (University of Houston); Christina Y. Weng (Baylor College of Medicine); and Taewoo Lee (University of Pittsburgh) Abstract Abstract Diabetic retinopathy (DR) is the leading cause of blindness for working-age US adults. While comprehensive screening examinations detect most early-stage DR cases, only 50-60% of diabetic patients adhere to the current annual screening guidelines. Recently, teleretinal imaging (TRI) has emerged as an accessible screening tool for patients with limited access. However, there exists no well-established guideline that incorporates TRI-based screening for such patients. We develop a Monte Carlo simulation model to replicate a safety-net system patient population using electronic medical record data from the Harris Health System (Houston, TX) and examine the cost and health benefits of various TRI-based screening policies. We conduct sensitivity analysis to study the impact of patient-specific factors including age, A1C level, and screening adherence on screening policy performance. Our findings support TRI-based screening for patients with limited access and highlight the significant role of patient-specific factors in determining cohort-level screening policies. Virtual Opioid User: Reproducing Opioid Use Phenomena with a Control Theory Model Alexander Preiss, Anthony Berghammer, and Georgiy Bobashev (RTI International) Abstract Abstract Few attempts have been made to simulate the complex natural history of opioid use disorder. We developed a model to simulate an agent’s opioid use over 15-minute time steps. We followed the principles of control theory and opponent-process theory, formalizing representations of several processes (tolerance, effect, craving) as weighted integrations of opioid concentration, which was modeled with a pharmacokinetic equation. We calibrated our model to reproduce five qualitative opioid use trajectories commonly observed in the literature. We demonstrate how a relatively simple control theory approach can reproduce many of the key characteristics of real-world opioid use. Track Coordinator - Hybrid Simulation: Andrew J. Collins (Old Dominion University), Caroline C. Krejci (The University of Texas at Arlington), Antuela Tako (Loughborough University, University of Kent) Hybrid Simulation Hybrid Simulation with Advanced Technology Chair: Le Khanh Ngan Nguyen (University of Strathclyde) Explainable AI for Data Farming Output Analysis: A Use Case for Knowledge Generation through Black-Box Classifiers Niclas Feldkamp, Jonas Genath, and Steffen Strassburger (Technische Universität Ilmenau) Abstract Abstract Data farming combines large-scale simulation experiments with high performance computing and sophisticated big data analysis methods. The portfolio of analysis methods for these large amounts of simulation data still offers potential for further development, and new methods emerge frequently.
Among the most interesting are methods of explainable artificial intelligence (XAI). These methods enable the use of black-box classifiers for data farming output analysis, as shown in a previous paper. In this paper, we apply the concept of XAI-based data farming analysis to a complex, real-world case study to investigate its suitability in a real-world application, and we also elaborate on which black-box classifiers are actually the most suitable for the large-scale simulation data that accumulates in a data farming project. Simulating Prosumer Data Trading: Testing a Blockchain Smart Contract Based Control David Bell and Naeem Bilal (Brunel University London) Abstract Abstract Online data trading has grown alongside the ever-increasing use of digital services. Industries are accruing the benefits of this data access to perform mission-critical tasks by analyzing available data for greater insight. Unsurprisingly, data trading has not focused on improved data seller protection with preferences and controls. The objective of this paper is to explore the enforcement of seller preferences within smart contracts using blockchain technology. Data trading is only possible when a buyer satisfies the conditions predefined by the seller. Geographic location and the type or size of the buyer’s company are some examples of seller preferences. A preferences algorithm provides an automated contract between seller and buyer without the involvement of any broker or third party. Hybrid simulation (HS) methods are used to test and evaluate the viability of our novel data control approach. Hybrid Simulation Hybrid Simulation Methodology Chair: David Bell (Brunel University London); Antuela Tako (Loughborough University, University of Kent) An MVP Approach to Developing Complex Hybrid Simulation Models William Jones, Philip Gun, and Mehdi Foumani (University of Sydney) Abstract Abstract Simulation is increasingly applied to capture highly complex systems and tackle ever more complex problems. We present a novel framework that aids modellers in undertaking long, highly complex simulation studies. Our framework provides a structured method guiding how a study will develop, aiding modellers to conceptualise a suitable model and plan how it will be coded. The approach incorporates the principle of minimum viable product development, a process directed toward satisfying the minimum requirements of the client as quickly as possible by deconstructing them into defined steps. The framework has wide applications, but its modularity is particularly well suited to complex hybrid simulation projects which integrate multiple techniques into a single model. Our framework ensures value is generated for the stakeholders early in a project. It facilitates agile development, involving stakeholders closely in the inevitably long project life-cycle and responding to their feedback as the project progresses to maximise value. Interfaces between SD and ABM Modules in a Hybrid Model Le Khanh Ngan Nguyen, Susan Howick, and Itamar Megiddo (University of Strathclyde) Abstract Abstract Modelers in various disciplines have applied system dynamics (SD) and agent-based models (ABM) to support decision-makers in managing complex adaptive systems. Combining these methods in a hybrid simulation offers an opportunity to overcome the challenges that modelers face using SD or ABM alone. It also provides a complementary view and rich insight into the problems that modelers investigate.
Hence, this approach can offer solutions to a plethora of systems problems. One of the limitations of existing frameworks that guide the process of combining SD and ABM is the lack of detailed guidance describing how the two methods can interact and exchange information. This paper provides guidance for interfacing these simulation modeling methods in a hybrid simulation. In this guidance, we describe interface approaches to exchanging information for different types of information flow between SD and ABM. From Conceptualization of Hybrid Modelling & Simulation to Empirical Studies in Hybrid Modelling Navonil Mustafee and Alison Harper (University of Exeter) and Masoud Fakhimi (University of Surrey) Abstract Abstract The combined application of simulation techniques is referred to as Hybrid Simulation (HS). The primary focus of HS is on the integrative application of techniques developed within the M&S field for better representation of the system under scrutiny. In contrast, Hybrid Modelling (HM) focuses on cross-cutting hybrid models that combine theories, frameworks, techniques and established research approaches that have been tried and tested and have existed as extant knowledge within academic disciplines such as Computing Science, Operations Research and Social Sciences. In previous work, the authors present a conceptual framework and classification of HS and HM. However, the translation of this framework toward developing HM models can be challenging. To address this gap, we discuss five empirical HM studies conducted by the authors and map them to the existing classification. The paper will serve as a reference point for developing HM studies that extend the theory and practice of M&S. Hybrid Simulation Hybrid Simulation Applications Chair: David Bell (Brunel University London) Simulation-based Pricing for a Rideshare Model Wee Meng Yeo (University of Glasgow) and Boray Huang (University of Dundee) Abstract Abstract This case study contributes to modelling and simulation suited for contexts where ridesharing services are rolled out in areas characterised by low income, poor accessibility to public transport and/or hard-to-reach essential services. It is inspired by the first author’s experience in working with a U.K.-based company under a completed industrial project “Pricing and Incentives for New Transport Solutions in Towns and Small Cities”. Applying concepts from revenue management and industrial organisation (a sub-field in economics), we develop a demand model and pricing formulae. These are combined with simulation to understand key market levers for balancing profitability and affordability. We find that the trade-off between profitability and affordability is shaped by an interplay of potential market size, competition dynamics, and passengers’ willingness to pay (WTP). We also hope to share learnings and reflections from the academic-industry collaborative project. Hybrid Simulation in Healthcare: A Review of The Literature Eyup Kar and Tillal Eldabi (University of Surrey) and Masoud Fakhimi (University of Surrey) Abstract Abstract The use of Hybrid Simulation (HS) has increased as problems have become more complex and multidimensional, with a particular focus on healthcare systems. Such complexities make it challenging for single simulation models to provide the right support for decision-making. This article reports on a preliminary review of the literature and investigates the prevalence and utilization of HS in healthcare.
Thirty-three relevant papers were found in the literature, including application papers, frameworks, and review papers. Our review categorizes the M&S techniques employed and analyzes the application type, software packages, trends, opportunities, and challenges of HS in healthcare. Findings show that combining Discrete Event Simulation and System Dynamics is the most common approach to developing HS models in healthcare. However, the popularity of combining Agent-Based Simulation with others is on the rise. Current limitations of the literature and opportunities for future research are discussed. FACS-CHARM: A Hybrid Agent-Based and Discrete-Event Simulation Approach for COVID-19 Management at Regional Level Anastasia Anagnostou, Derek Groen, Simon J.E. Taylor, Diana Suleimenova, Nura Abubakar, Arindam Saha, Kate Mintram, Maziar Ghorbani, Habiba Daroge, Tasin Islam, Yani Xue, Edward Okine, and Nana Anokye (Brunel University London) Abstract Abstract Pandemics have a huge impact on all aspects of people’s lives. As we have experienced during the Coronavirus pandemic, healthcare, education and the economy have been put under extreme strain. It is therefore important to be able to respond to such events quickly in order to limit the damage to society. Decision-makers are typically advised by experts in order to inform their response strategies. One of the tools that is widely used to support evidence-based decisions is modeling and simulation. In this paper, we present a hybrid agent-based and discrete-event simulation for Coronavirus pandemic management at the regional level. Our model considers disease dynamics, population interactions and dynamic ICU bed capacity management and predicts the impact of various public health preventive measures on the population and the healthcare service. Hybrid Simulation Hybrid Simulation in Human Systems Chair: Andrew J. Collins (Old Dominion University) A System Dynamics Model for Studying the Resiliency of Supply Chains and Informing Mitigation Policies for Responding to Disruptions William Bland, Andrew Hong, Lauren Rayson, Jennifer Richkus, and Scott Rosen (MITRE Corporation) Abstract Abstract Economic shocks are unanticipated events that have widespread impact on an economy and can lead to supply chain disruptions that propagate from one region to another. The COVID-19 pandemic is a recent example. Simulations have been applied to study the impact of COVID-19 shocks on supply chains at the macro level using various approaches. This research has developed a hybrid System Dynamics and Input/Output simulation to model the economic impact of various types of supply chain disruptions. The hybrid model provides results that match historical performance of the U.S. economy under COVID-19 shocks and provides reasonable results when applied to investigate U.S. dependence on foreign trade. Its graphical nature also supports a decision support tool that will allow policymakers to explore the costs and benefits of various policy decisions designed to mitigate the impact of a broad set of potential supply chain disruptions. A Hybrid Model of Multiple Team Membership and its Impacts on System Design Andrew James Collins and Sheida Etemadidavan (Old Dominion University) Abstract Abstract Within an organization, Multiple Team Membership (MTM) occurs when employees are working in multiple teams simultaneously.
Approximately 65% of all knowledge workers are working in an MTM environment; however, research into MTM has only begun to emerge over the last decade, with no application of simulation to date. Extant research, based on human subject studies, has focused on the impact of utilizing MTM on the productivity and effectiveness of individuals or teams, effectively looking at micro-level phenomena. This paper outlines the first attempt to understand the macro-level phenomena of MTM using quantitative means. The scenario under consideration is a large system design project that requires multiple interdependent teams of engineering designers. An agent-based simulation of this scenario was created. The results from a simulation experiment indicate that using MTM helps in more complex design projects, i.e., it increases the performance of finding a feasible design solution when coupling is introduced. Modelling and Simulation of Cyber-Physical Systems using an extensible Co-Simulation Framework Jan Reitz, Tobias Osterloh, and Jürgen Roßmann (RWTH Aachen University) Abstract Abstract Modern systems increasingly utilize software and especially computer networking to create new functionality, resulting in Cyber-Physical Systems (CPS). Due to the rapid progress in hardware as well as software technology, the range of applications of CPS is ever increasing. Despite the potential of simulation technology, several distinct challenges arise when simulating CPS, which are mostly due to their heterogeneity. In this paper, we present an approach to modelling and simulation of CPS that embraces this heterogeneity by integrating component models realized in domain-specific simulation tools in a co-simulation framework using a flexible plugin system. The framework includes an active simulation database that is used for modelling and communication, and a DE scheduler to orchestrate the co-simulation scenario. The approach is applied to a modular spacecraft where computational hardware is emulated and computer networks are simulated. This results in an integrated simulation of both physical and information processing systems. Track Coordinator - Introductory Tutorials: Anastasia Anagnostou (Brunel University London), Canan Gunes Corlu (Boston University) Introductory Tutorials Tutorial: Metamodeling for Simulation Chair: Chris Kuhlman (University of Virginia) Russell Barton (Penn State) Abstract Abstract Metamodels are fast-to-compute mathematical models that are designed to mimic the input-output behavior of discrete-event or other complex simulation models. Linear regression metamodels have the longest history, but other model forms include Gaussian process regression and neural networks. This introductory tutorial highlights basic issues in choosing a metamodel type and specific form, and making simulation runs to fit the metamodel. The tutorial ends with advice on validation, and suggestions on further reading to expand your understanding of these methods. Introductory Tutorials How to Build Valid and Credible Simulation Models Chair: Edward Y. Hua (MITRE Corporation) Averill M. Law (Averill M. Law & Associates, Inc.) Abstract Abstract In this tutorial we present techniques for building valid and credible simulation models.
Ideas to be discussed include the importance of a definitive problem formulation, discussions with subject-matter experts, interacting with the decision-maker on a regular basis, development of a written assumptions document, structured walk-through of the assumptions document, use of sensitivity analysis to determine important model factors, and comparison of model and system output data for an existing system (if any). Each idea will be illustrated by one or more real-world examples. We will also discuss the difficulty in using formal statistical techniques (e.g., confidence intervals) to validate simulation models. Introductory Tutorials Resource Modeling in Business Process Simulation Chair: Masoud Fakhimi (University of Surrey) Paolo Bocciarelli and Andrea D'Ambrogio (University of Rome Tor Vergata) and Gerd Wagner (Brandenburg University of Technology) Abstract Abstract Business Process (BP) models address the specification of the flow of events and activities, along with the dependencies of activities on resources. BP models are often analyzed by using simulation-based approaches. This paper focuses on resource modeling for BP modeling and simulation, by first introducing the most important concepts and discussing how resources are modeled in the standard BP modeling notation (i.e., BPMN) and in the area of Discrete Event Simulation. Then, the paper presents two newer BP modeling and simulation approaches, namely Object Event Modeling and Simulation (OEM&S) with the Discrete Event Process Modeling Notation (DPMN) and the JavaScript-based simulation framework OESjs, and Performability-enabled BPMN (PyBPMN) with the Java-based simulation framework eBPMN. A simple but effective case study dealing with a pizza service process is used to illustrate the main features of the presented approaches. Introductory Tutorials Computer Assisted Military Experimentations Chair: Anastasia Anagnostou (Brunel University London) Erdal Cayirci (Joint Warfare Training Center, Dataunitor AS) and Ramzan AlNaimi and Sara Salem AlNabet (Joint Warfare Training Center) Abstract Abstract Computer assisted military experimentation methodology and process are explained. The military processes that can benefit from computer assisted military experimentation are introduced, and the best practices for each process are elaborated on. Finally, emerging new concepts and their potential impact on military experimentation requirements are briefly discussed, and the tutorial is concluded. During the tutorial, live demonstrations are made for geostrategic foresight development, defense planning, operational plan analysis, computer assisted military experimentation design, and the conduct of a computer assisted military experiment. Introductory Tutorials Simheuristics: An Introductory Tutorial Chair: Canan Gunes Corlu (Boston University) Angel A. Juan (Universitat Politècnica de València); Yuda Li, Majsa Ammouriova, and Javier Panadero (Universitat Oberta de Catalunya); and Javier Faulin (Institute of Smart Cities) Abstract Abstract Both manufacturing and service industries are subject to uncertainty. Probability techniques and simulation methods allow us to model and analyze complex systems in which stochastic uncertainty is present. When the goal is to optimize the performance of these stochastic systems, simulation by itself is not enough and it needs to be hybridized with optimization methods.
Since many real-life optimization problems in the aforementioned industries are NP-hard and large-scale, metaheuristic optimization algorithms are required. The simheuristics concept refers to the hybridization of simulation methods and metaheuristic algorithms. This paper provides an introductory tutorial on the concept of simheuristics, showing how it has been successfully employed in solving stochastic optimization problems in many application fields, from production logistics and transportation to telecommunication and insurance. Current research trends in the area of simheuristics, such as their combination with fuzzy logic techniques and machine learning methods, are also discussed. Introductory Tutorials Simulation: The Critical Technology in Digital Twin Development Chair: Canan Gunes Corlu (Boston University) Bahar Biller, Xi Jiang, Jinxin Yi, and Paul Venditti (SAS Institute, Inc.) and Stephan Biller (Purdue University) Abstract Abstract Digital twins are virtual representations of physical entities and processes. They aid in deriving insights to control entities and processes in the digital world and use those insights to drive actions in the physical world. Simulation is one of the key enabling technologies that lie at the heart of digital twin development, as it provides enhanced visibility into future performance and the ability to identify profit-optimal decisions. This tutorial describes how we envision digital twins developed in industry and the pivotal role simulation plays in their development. Using supply chain digital twins as an example application, we introduce our digital twin framework that simulation practitioners might find useful when developing their digital twin solutions to understand what did happen, predict what may happen, and determine solutions to fix future problems before they happen. We conclude with simulation research streams that contribute to the use of simulation in digital twin development. Introductory Tutorials Defining DEVS Models Using the Cadmium Toolkit Chair: Cristina Ruiz-Martín (Carleton University); Gabriel Wainer (Carleton University) Gabriel Wainer and Cristina Ruiz-Martín (Carleton University) Abstract Abstract Discrete Event System Specification (DEVS) is a mathematical formalism to model and simulate discrete-event dynamic systems. Using DEVS for modeling and simulation has numerous advantages, which include a rigorous formal definition of models, a well-defined mechanism for modular composition, and separation of concerns between the model definition and the simulation of the model, among others. In this tutorial, we will explain DEVS and present how to develop DEVS models using one of the multiple DEVS simulators: Cadmium. Cadmium is a DEVS simulator based on C++17. We will discuss the tool’s Application Programming Interface and we will present a model for the Rock-Paper-Scissors game as an example to explain how to define models in DEVS and implement them in Cadmium. Introductory Tutorials Digital Twin as an Aid for Decision-making in the Face of Uncertainty Chair: Andrea Ferrari (Politecnico di Torino) Vinay Kulkarni and Souvik Barat (Tata Consultancy Services Ltd), Tony Clark (Aston University, Birmingham, UK), and Balbir Barn (Middlesex University London) Abstract Abstract This introductory tutorial presents a pragmatic approach to decision-making in the context of uncertainty using digital twins.
It provides a detailed understanding of why digital twins are an important technology for exploring the unique phenomena of complex socio-cyber-economic ecosystems. The tutorial presents a critical analysis of the current state of the art and practice of digital twins in decision making and includes a focus on our actor-based language called Enterprise Simulation Language (ESL). This language is an example of an infrastructure for developing these digital twins for complex systems decision making. The tutorial helps you understand how ESL and related technologies can be used effectively by presenting multiple examples of real-world use of ESL to build digital twins. Problems addressed by these examples include: COVID-19 intervention planning, customer lifetime value optimization in the telecoms industry, supply chain, and return-to-work planning post COVID-19. Introductory Tutorials A Tutorial On Combining Flexsim With Python For Developing Discrete-Event Simheuristics Chair: Juan Fernando Galindo Jaramillo (UNICAMP, University of Campinas) Jonas Fuentes Leon (Spindox, Spindox Spain); Mohammad Peyman and Yuda Li (Universitat Oberta de Catalunya); Mohammad Dehghanimohammadabadi (Northeastern University); Laura Calvet (Universitat Oberta de Catalunya); Paolo Marone (Spindox); and Angel A. Juan (Universitat Politecnica de Valencia) Abstract Abstract Connecting commercial discrete-event simulation packages to external software or programming languages is essential to advance simulation modeling capabilities. For instance, this connectivity allows users to link the simulation environment to metaheuristic optimization algorithms or to machine learning methods. However, implementing these connections is not a trivial task, and may require API access and proper configuration settings. This tutorial provides a step-by-step guideline to connect the FlexSim commercial simulator with the popular Python programming language via sockets. Using this type of connection, a simheuristic algorithm coded in Python aims at optimizing the product allocation in a warehouse, which has been previously modeled in the aforementioned simulator. In addition, potential future applications of this software combination will be discussed to provide insights into future developments such as more advanced simheuristics or combinations of simulation with learnheuristics. Track Coordinator - Logistics, Supply Chains, Transportation: Dave Goldsman (Georgia Institute of Technology), Markus Rabe (TU Dortmund University, MB / ITPL), Lei Zhao (Tsinghua University) Agent-based Simulation, Logistics, Supply Chains, Transportation Organization of Transport Systems Chair: Michael Kuhl (Rochester Institute of Technology) Designing Mixed-Fleet of Electric and Autonomous Vehicles for Home Grocery Delivery Operation: An Agent-Based Modelling Study Dhanan Sarwo Utomo, Adam Gripton, and Philip Greening (Heriot-Watt University) Abstract Abstract This paper proposes a hypothetical agent-based model of home grocery delivery operation using electric and autonomous vehicles. In the last-mile delivery context, agent-based modelling studies that consider the use of autonomous vehicles are lacking. The model in this paper can produce a mixed-fleet design that can serve a set of synthetic orders punctually. Through extensive computer experimentation, firstly, we investigate how infrastructure setup affects the fleet design. Secondly, we highlight the benefits of mixed-fleet over homogeneous fleet design.
Thirdly, we evaluate the benefits of using autonomous vehicles in last-mile delivery operations. Optimal Fleet Policy of Rental Vehicles with Relocation in New Zealand: Agent-based Simulation John-Carlo Favier, Subhamoy Ganguly, and Timofey Shalpegin (University of Auckland) Abstract Abstract An important strategic decision for rental car operators is whether to implement a single-fleet or multi-fleet model. The single-fleet model allows the movement of vehicles between regions, whereas the multi-fleet model does not. In practice, different rental car operators use different models. To address this problem, we have developed two simulation models and compared them in terms of fleet utilization, branch service level, relocations, and, ultimately, operating profit. We have taken the New Zealand rental car industry as an example as the country consists of two well-defined regions, and one-way southbound travel is a preferred option for many customers. The results indicate that a multi-fleet model has a higher service level at key centers and higher utilization. At the same time, the single-fleet model is relatively more profitable at the expense of a lower service level in key centers due to vehicles accumulating in the South Island. Development of a Simulation Framework for Urban Ropeway Systems and Analysis of the Planned Ropeway Network in Regensburg, Germany Simon Haimerl, Christoph Tschernitz, Tobias Schiller, Christoph Weig, Stefan Galka, and Ulrich Briem (Ostbayerische Technische Hochschule Regensburg) Abstract Abstract To evaluate the performance of a ropeway in an urban environment, simulations of the dynamic passenger transport characteristics are required. Therefore, a modular simulation model for urban ropeway networks was developed, which can be flexibly adapted to any city and passenger volume. This simulation model was used to analyze the ropeway network concept of the German city of Regensburg and to determine the expected operating conditions. The passenger volume, the different types of persons, their occurrence probabilities and their destination distributions depend on the location and time of day and can be defined for each individual station. In an initial analysis, the passenger numbers currently observed in bus traffic were projected onto the ropeway network. To enable climate-friendly and efficient operation, different strategies were developed to significantly reduce the number of gondolas. The best-fitting strategies resulted in significant cost savings while passenger comfort, as represented by queue time, remained unchanged. Logistics, Supply Chains, Transportation Intralogistics Chair: Xueping Li (University of Tennessee) Design and Control of Shuttle-based Storage and Retrieval Systems Using a Simulation Approach Best Contributed Theoretical Paper - Finalist Donghuang Li and Jeffrey S. Smith (Auburn University) and Yingde Li (Zhejiang University of Technology) Abstract Abstract In recent years, Shuttle-based Storage and Retrieval Systems (SBS/RSs) have been widely applied in distribution centers and production sites to meet the increasing demand for rapid and flexible large-scale warehousing activities. Recognizing the complex service dynamics due to the use of different types of S/R devices, both the configuration design problem and the operational control problem need to be studied in order to achieve efficient, sustainable and robust performance of the system.
An animated, data-driven and data-generated simulation model is developed to support the development of both the design and configuration methodology and the operational control strategy of SBS/RS-based warehouse systems. The model enables detailed analysis of different system configurations and technology options including tier-captive and tier-to-tier, multi-deep rack designs, multi-capacity lifts, etc., and provides visualized tracking of accurately simulated service processes of the S/R devices and performance evaluation under configurable demand scenarios. Dispatching Automated Guided Vehicles Considering Transport Load Transfers Patrick Boden, Sebastian Rank, and Thorsten Schmidt (TU Dresden / Chair of Material Handling and Logistics Engineering) Abstract Abstract The paper presents a dispatching algorithm for Automated Guided Vehicle systems that considers transport load transfers between vehicles to improve system performance. So far, transfers have largely been neglected in the control of Automated Guided Vehicle systems. Nevertheless, applications from other domains, such as courier services, demonstrate efficiency gains from considering transfers. The concept of transfers allows an exchange of transport items between vehicles during transport execution. Therefore, a transport job is divided into sub-transport jobs executed by different vehicles. With our algorithm, transfers are planned ad hoc in real time, depending on the current system status. The objective is to improve system performance by decreasing vehicle utilization to yield higher throughput. A case study examines the algorithm using a material flow simulation. As a key result, the simulation study revealed that vehicle utilization could be reduced by up to 4% for the same throughput when transport load transfers are allowed. Deadlock Avoidance Dynamic Routing Algorithm for a Massive Bidirectional Automated Guided Vehicle System Kang Min Kim and Chang Hyun Chung (KAIST) and Young Jae Jang (KAIST, Daim Research) Abstract Abstract In a bidirectional automated guided vehicle (AGV) system, it is essential to allocate routes to AGVs to avoid deadlocks. However, avoiding deadlocks is more challenging in dynamic environments where AGVs are continuously assigned tasks and thus operate simultaneously. This paper proposes a deadlock prevention method for dynamic environments. The proposed method combines the active path reservation method, which restricts the directions of the paths of some moving AGVs, and the dynamic Dijkstra algorithm, which finds the shortest route according to the path reservation status. Apart from the Dijkstra algorithm, the proposed method is compatible with other routing algorithms, such as the Q-learning-based route algorithm; therefore, the proposed method enables the development of more efficient transport systems that account for AGV congestion. The efficiency and scalability of the proposed method were verified using Applied Materials AutoMod (version 14.0) simulation software. Logistics, Supply Chains, Transportation Machine Learning and Data Analysis Chair: Maylin Wartenberg (Hochschule Hannover) A New Data Farming Procedure Model for a Farming for Mining Method in Logistics Networks Joachim Hunker, Anne Antonia Scheidler, Markus Rabe, and Hendrik van der Valk (TU Dortmund University) Abstract Abstract A key factor in maintaining a logistics network in a competitive state is gaining and visualizing knowledge. The process of gaining knowledge from a given data basis is called knowledge discovery in databases.
Besides gathering observational data, simulation can be used to generate a suitable data basis for knowledge discovery, an approach known as data farming, which is typically implemented as a study. Conducting such a study requires a suitable procedure model, describing and structuring the tasks of the process. However, existing procedure models focus on defense applications, while considerably less work has been put into transferring the approaches to logistics networks. Therefore, the authors developed a procedure model for conducting a data farming study in logistics networks. In this work, we systematically introduce the necessary background and discuss existing approaches in the literature. Furthermore, we present a software framework that is used to support the process in a practical application context. Application of Deep Reinforcement Learning for Planning of Vendor-Managed Inventory for Semiconductors Fazail Ahmad (Infineon Technologies AG), Santiago Nieto-Isaza (Universidad del Norte), and Marco Ratusny and Hans Ehm (Infineon Technologies AG) Abstract Abstract Previous research has shown that Deep Reinforcement Learning (DRL) can be applied to the planning of Vendor-Managed Inventory (VMI) for semiconductors. This study extends the research in this direction by overcoming the limitations of existing work and evaluating the approach through a case study at a semiconductor supplier. The results strengthen the potential of DRL for VMI planning. Moreover, the developed framework can be deployed with other advanced DRL algorithms. An Adaptive Large Neighborhood Search Algorithm for Wind Farm Inspection Using a Truck with a Drone Wenyu Tao, Xinjia Jiang, and Dongqiang Zhao (Nanjing University of Aeronautics and Astronautics) Abstract Abstract With the increasing demand for energy, wind power as a new energy source has been widely used and developed on a large scale. To extend the life of wind turbines, it is necessary but difficult to carry out regular inspections in wind farms located in remote areas. This paper studies the clustering and routing problem of truck-drone joint inspection of wind farms. An Adaptive Large Neighborhood Search (ALNS) algorithm is designed based on the characteristics of this problem. In addition, wind farm instances with different sizes and distributions are generated in this paper to simulate realistic scenarios and evaluate ALNS. Finally, real wind farm instances are tested to demonstrate the inspection time in detail. Computational experiments show that ALNS can significantly improve inspection time compared with an alternative method. Logistics, Supply Chains, Transportation AI and Optimization Chair: Steffen Strassburger (Technische Universität Ilmenau) Towards Deadlock Handling with Machine Learning in a Simulation-based Learning Environment Marcel Müller (Otto von Guericke University Magdeburg); Lorena Silvana Reyes-Rubiano (Otto von Guericke University Magdeburg, Universidad de La Sabana); and Iegor Kutsenko, Tobias Reggelin, and Hartmut Zadek (Otto von Guericke University Magdeburg) Abstract Abstract The planning of complex logistic systems must ensure collision- and deadlock-free operation of the logistic system. Problem-specific rule-based algorithms used so far are inflexible with respect to infrastructure changes and scale poorly as systems grow larger. This paper presents a first approach to handling logistic deadlocks with machine learning. We present a conceptual approach on how to handle logistic deadlocks with artificial neural networks.
The paper also provides a technical implementation with a single-agent approach based on reinforcement learning with deep Q-networks. A discrete event simulation of an automated guided vehicle system is used as the learning environment. The first results show that artificial neural networks can learn to handle deadlock-capable logistic systems of low complexity. Solving Facility Location Problems for Disaster Response Using Simheuristics and Survival Analysis: A Hybrid Modeling Approach Bhakti Stephan Onggo (University of Southampton); Xabier Martin (Universitat Oberta de Catalunya); Canan Gunes Corlu (Boston University); Javier Panadero (Open University of Catalonia, Euncet Business School); and Angel A. Juan (Universitat Politècnica de València) Abstract Abstract One of the important decisions to mitigate the risk from a sudden onset disaster is to determine the optimal location of facilities (e.g., warehouses), because it affects the subsequent humanitarian operations. Therefore, researchers have proposed several methods to solve the facility location problem (FLP) in disaster management. This paper considers a stochastic FLP where the goal is to minimize the expected time required to provide service to all affected regions and the travel time is stochastic due to uncertain road conditions. The number of facilities to open is constrained by a certain maximum budget. To solve this stochastic optimization problem, we propose a hybrid simulation optimization model that combines a simheuristic algorithm with a survival analysis method to evaluate the probability of meeting the demand of all affected areas within a time target. The experiment using a benchmark set shows that our model outperforms the deterministic solutions by about 8.9%. Analysis of Triggers of Port Congestions Using a Tree-Based Machine Learning Classifier and Explainable Artificial Intelligence Sugyeong Jo, Sung Uk Jung Jung, and Sang Jin Kweon (Ulsan National Institute of Science and Technology) Abstract Abstract Increased maritime traffic leads to port congestion and negatively affects the demurrage rate, which refers to the percentage of vessels in a port's queue for more than a fixed time to load/unload. The demurrage rate indicates the level of port congestion and must be as low as possible. We suggest a methodology to derive possible triggers of the demurrage rate based on real data. To this end, we collect annual data on vessels arriving at and exiting the port of Ulsan, Republic of Korea, and combine it with berth and weather data. We use a tree-based machine learning classifier algorithm and explainable AI to evaluate and analyze demurrage patterns at the port of Ulsan. We then propose policy recommendations to reduce port congestion. Our results show that demurrage highly depends on berth type, previous and next ports, day of the week, availability of tugs or pilots, and entering time at port. Logistics, Supply Chains, Transportation Distribution and Warehouse Optimization Chair: Bhakti Stephan Onggo (University of Southampton) Determining the Optimal Work-Break Schedule of Temporary Order Pickers in Warehouses Considering the Effects of Physical Fatigue Junsu Kim and Hosang Jung (Inha University) Abstract Abstract Motivated by a real-world order picking operation of an e-commerce retailer in South Korea, this paper studies a work-break scheduling problem of temporary order pickers in the warehouse, with a particular emphasis on the effects of physical fatigue.
There are many temporary workers; furthermore, the proportion of elderly workers is increasing in e-commerce warehouses' order picking operations. Therefore, we first divide temporary order pickers into several groups based on their ages and experiences. We integrate fatigue-recovery equations to calculate each group's fatigue level under an industry's order picking schedule. We find that the current work-break schedule makes some groups work beyond their maximum fatigue level. A mixed integer linear programming model is formulated to determine the optimal work-break schedules satisfying the upper bound of the fatigue level. With our novel model, the order pickers' fatigue levels would be diminished if customized work-break schedules are implemented rather than an identical schedule for all groups. Decision-making Impacts of Originating Picking Waves Process for a Distribution Center Using Discrete-event Simulation Luiz Lang (AGR Consultants), Leonardo Chwif (Maua Institute of Technology), and Wilson Pereira (Simulate Simulation Technology) Abstract Abstract Discrete-Event Simulation is a powerful tool in modeling logistic systems, especially picking operations, usually the most costly activity in warehouses. However, it is not common practice to include human decisions in Discrete-Event Simulation projects. This paper reports a Discrete-Event Simulation model designed to evaluate picking wave strategies in a distribution center of the optical industry leader in Brazil. It was necessary to model four scenarios of the picking wave generation process to evaluate the best picking wave strategy. In the best scenario, we achieved an average reduction of 10% in total operation time, along with an average reduction of 4% in the pickers’ total walking distance. Order Release Strategies for a Collaborative Order Picking System Quang-Vinh Dang, Tugce Martagan, and Ivo Adan (Eindhoven University of Technology) and Jan Kleinlugtenbeld (Vanderlande Industries B.V.) Abstract Abstract A collaborative order picking system (COPS) enables human-robot collaboration by using order pickers for picking and autonomous mobile robots (AMRs) for transporting load carriers. Owing to the potential performance enhancement compared to a traditional manual order picking system, COPSs are gaining momentum in the retail warehousing sector. This paper proposes order release strategies based on priority and dispatching rules to achieve the best pick rate performance per AMR. A discrete event simulation model is developed to facilitate the evaluation of the proposed strategies. Their effectiveness is demonstrated with the use of real-world data from a case study warehouse. Our computational results show that a COPS using the proposed strategies significantly improves the pick rate performance compared to the current practice. Logistics, Supply Chains, Transportation Manufacturing Optimization Chair: Hai Wang (SMU) Closing the Gap: A Digital Twin as a Mechanism to Improve Spare Parts Planning Performance Joan Stip and Lois Aerts (Eindhoven University of Technology, ASML) and Geert-Jan van Houtum (Eindhoven University of Technology) Abstract Abstract In order to meet service level agreements at minimal cost, Original Equipment Manufacturers (OEMs) use spare parts planning models to determine the optimal base stock levels at the warehouses in their service network. In practice, however, these optimized base stock levels result in a realized performance that deviates from the expected performance.
Therefore, it is beneficial for these companies to evaluate the base stock levels in terms of service performance, inventory value, and costs. In order to measure this planning performance, we developed a digital twin that is able to measure the planning performance and identify root causes for the performance gap. Our digital twin helped ASML, an OEM in the semiconductor industry, to create a feedback loop between the spare parts planning model and its realized performance in practice, providing a mechanism to learn from past results and determine actions to close the gap between expected and realized performance. Development of DES Application for Factory Material Flow Simulation with Simpy So-Hyun Nam, Seung-Heon Oh, Hee-Chang Yoon, Young-In Cho, Ki-Young Cho, Dong-Hoon Kwak, and Jong Hun Woo (Seoul National University) Abstract Abstract Since most studies on logistics simulation have used commercial software, there has always been a limit in terms of customization and performance. In this research, DES (discrete event simulation), a program that is capable of evaluating logistics for shipyard layout changes, was developed. DES was developed using Python-based open-source software, and represents yard layouts as network models in order to model the complex factory layouts and road configurations of shipyards. The resulting application not only performs high-speed calculations on the frequency of road use, travel distance of transporters, travel distance of blocks, and workloads in stockyards and factories through simulation, but also analyzes productivity changes according to various layout configurations, production plans, product configurations, and resource conditions. Simulation of Industrial Systems for Next-generation Aircraft Manufacturing Arnd Schirrmann (Airbus) Abstract Abstract Co-design is an important prerequisite for finding the best design solution for both future aircraft and the corresponding industrial systems. This paper discusses the use of simulation in the process of co-defining the optimal industrial system configuration for a future aircraft, with a case study determining the logistics system for aircraft fuselage manufacturing. The scope of the paper includes the industrial performance, the logistics costs and the environmental footprint of the logistic system. To study the large design space, the generation and the execution of the simulation have to be automated in a scalable cloud infrastructure. The simulation is embedded into a complex modeling and analysis environment for defining the system parameters and constraints, automated (logistics system) scenario generation, simulation-based key performance indicator calculation, and results visualization and comparison. Logistics, Supply Chains, Transportation Transportation and Logistics Scheduling Chair: Klaus Altendorfer (Upper Austrian University of Applied Science) A Simulation-heuristic Approach to Optimally Design Drone Delivery Systems in Rural Areas Xudong Wang, Kimon Swanson, Zeyu Liu, Gerald Jones, and Xueping Li (University of Tennessee, Knoxville) Abstract Abstract In recent years, drone delivery has become one of the most widely adopted emerging technologies. Under the current Covid-19 pandemic, drones greatly improve logistics, especially in rural areas, where inefficient road networks and long distances between customers reduce the delivery capacity of conventional ground vehicles.
Considering the limited flight range of drones, charging stations play essential roles in the rural delivery system. In this study, we utilize simulation to optimize the drone delivery system design, in order to minimize the cost of serving the maximum capacity of customers. As facility siting is usually difficult to optimize, we propose a novel simulation-heuristic framework that continuously improves the objective to find near-optimal solutions. In addition, we conduct a case study using real-world data collected from Knox County, Tennessee. The results suggest that the proposed approach saves over 15% on total costs compared with the benchmark. Effect of Real-Time Truck Arrival Information on the Resilience of Slot Management Systems Ratnaji Vanga and Yousef Maknoon (Technische Universiteit Delft), Sarah Gelper (Eindhoven University of Technology), and Lorant A. Tavasszy (Technische Universiteit Delft) Abstract Abstract Traffic congestion is uncertain and undesirable in logistics and leads to arrival uncertainty at downstream locations, engendering disruptions. This paper considers a loading facility that uses a Truck Appointment System (TAS) for slot management and faces incoming truck arrival uncertainty due to traffic congestion. Due to the recent advancements in cyber-physical systems, we propose an adaptive system that uses the real-time truck Estimated Time of Arrival (ETA) data to make informed decisions. We develop an integer mathematical model to represent the adaptive behavior that determines the optimal reschedules by minimizing the average truck waiting time. We developed a simulation model of the adaptive system and reported the estimated benefits from our initial experiments. Applying Simulation to Estimate Waiting Times and Optimize the Booking Size for Oversea Transportation Vessels Matthias Winter and Klaus Altendorfer (University of Applied Sciences Upper Austria) and Stefan Pickl (University of the Federal Armed Forces Munich) Abstract Abstract The aim of this research is to determine the booking size for vessels in oversea delivery to minimize transportation costs. In the studied setting, a producer must book transportation space in advance, whereby the arrival processes for containers and vessels are stochastic. Analytical approaches of queueing theory are inconvenient in this case, and a discrete event simulation is therefore used to estimate the objective function. Moreover, the booking size is optimized for a static and a simple dynamic booking policy using a discretization of the solution space. The results show that the higher the variance in the shipping cycle, the higher the optimal booking size and the total transportation costs. The dynamic booking policy significantly outperforms the static policy and indicates potential for future research. Logistics, Supply Chains, Transportation Food and Health Chair: Christos Alexopoulos (Georgia Institute of Technology) From Efficiency to Fairness: Design of Allocation Rules for Food Bank Operations Jinpeng Liang (Dalian Maritime University) and Guodong Lyu (Hong Kong University of Science and Technology) Abstract Abstract Food banks play an essential role in alleviating world hunger by allocating surplus food to eligible agencies or individuals. As non-profit organizations, food banks target agencies to achieve operational efficiency of food allocation (i.e., reduce food waste). However, this would result in inequitable service delivered to different agencies.
In this work, we design real-time food allocation rules to serve the sequentially revealed demand of each targeted agency, and to ensure that adequate food is allocated to each agency (fairness) as much as possible. We measure allocation fairness by fill rate (i.e., the ratio of the allocated amount to revealed demand) and exploit online convex optimization tools to characterize the attainable fill rate of the agency. We use these insights to develop provably near-optimal allocation rules for food bank operations, and leverage extensive numerical simulations to discuss the promising benefits of our allocation rules over the existing benchmark. Simulation of IT Data Integration to Optimize an Antibiotics Supply Chain with System Dynamics Ines Julia Khadri (Uppsala University) and Joe Viana (BI Norwegian Business School) Abstract Abstract Supply chain (SC) optimization is essential for a firm to cope with ever-changing market conditions and disruptions. New technologies have allowed for more advanced supply chain optimization. This paper uses system dynamics (SD) simulation to model the effects of data integration technologies on an antibiotic (AB) SC operation. The study aims to improve the AB SC to benefit all relevant stakeholders including the patient population. We evaluate how IT integration technologies can improve communication across the SC to mitigate or reduce the impact of disruptions on AB users. The presented model is under development and is subject to structural and parametric changes as discussions continue with stakeholders about the system structure and what data can be used and disclosed. Despite the extensive SC optimization literature, there has been a growing call for an evidence base to support decision making relating to national medicine policies. The Impact of Batch Dispatching on Vaccine Manufacturing Throughput Robin Kelchtermans, Donovan Guttieres, Carla Van Riet, Catherine Decouttere, and Nico Vandaele (KU Leuven) Abstract Abstract The COVID-19 pandemic has highlighted the challenge of rapid, large-scale vaccine production following an unanticipated spike in demand. Increasing the total throughput of a vaccine manufacturing network is crucial. However, the production process is complex and has many distinct steps, each with high stochasticity. Discrete event simulation is used to simulate this manufacturing network, with a focus on testing the impact of dispatching rules used to allocate batches between drug substance and drug product sites. We propose a new dispatching rule based on a push-pull mode, often observed in practice. This increases the utilization of the network and keeps the batches at the drug substance site for a longer period, which improves flexibility in the allocation. Logistics, Supply Chains, Transportation Supply Chain Applications Chair: Javier Faulin (Public University of Navarre, Institute of Smart Cities) Modeling and Simulation of Food Bank Disaster Relief Operations Monica Kothamasu, Eduardo Perez, and Francis A. Mendez Mediavilla (Texas State University) Abstract Abstract Food banks obtain in-kind donations (i.e., supplies) from individual donors, and from public and private organizations. The amount of supplies required to support distributing agencies is very dynamic, especially after natural disasters when food banks become major players in the disaster relief efforts. Therefore, planning for the operation of food banks under both normal and disaster relief conditions is a challenging problem.
In this paper, a discrete-event simulation model is developed to represent the operations of a network of food banks at the supply chain level. The model is used to investigate the impact of multiple disaster relief operational policies (i.e., supply prepositioning, distribution center assignment) in the distribution of supplies and in meeting the demand. The results show that there is a 48% increase in the overall demand fulfillment rate if food banks operate as an integrated network with supply prepositioning and demand splitting between operating facilities. Simulation-Based Order Management for the Animal Feed Industry Daniel Rippel and Michael Lütjen (BIBA - Bremer Institut für Produktion und Logistik GmbH at the University of Bremen) and Michael Freitag (University of Bremen) Abstract Abstract Animal feed production constitutes a significant market in today's agricultural sector, with an annual turnover of 55 billion euros within the European Union in 2020. Nevertheless, feed logistics still suffers from low digitization and manually coordinated supply chains. These factors lead to high transportation and product costs for customers and retailers by inducing short-term orders that often disregard current price developments. This article presents a simulation model for feed supply networks consisting of a number of customers, retailers, and manufacturers. It proposes a fuzzy-based decision strategy for customers to decide when to order specific products. Moreover, it describes a possible decision strategy for retailers to optimize their transport routes by selecting viable manufacturers. The evaluation shows that the proposed decision strategy can reduce costs for feed and, depending on the supply network structure, reduce delivery distances for feed retailers. An Agent-based Simulation Model to Mitigate the Bullwhip Effect via Information Sharing and Risk Pooling Md Zahidul Islam, Nettie Roozeboom, Payton Gunderson, Xueping Li, and Andrew Junfang Yu (University of Tennessee - Knoxville) Abstract Abstract The bullwhip effect, a phenomenon of progressively larger distortion of demands across a supply chain, can cause chaos and disorder with amplified supply and demand misalignment. In this research, we investigate ways to decrease the bullwhip effect via risk pooling and information sharing through a simulation study. An agent-based simulation model was developed to evaluate how risk pooling and information sharing between distinct entities in a supply chain can reduce the bullwhip effect. Specifically, we are interested in the effectiveness of these two strategies through their interplay when they are applied simultaneously and separately. We simulate a three-echelon supply chain by considering one manufacturer, one wholesaler, and two retailers. Four scenarios are evaluated by varying the information sharing strategy (centralized and decentralized), and with and without a risk pooling policy. The results show that when both strategies are adopted, the supply chain faces less order amplification throughout.
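As a rough illustration of the kind of bullwhip experiment described in the preceding abstract, the following minimal Python sketch simulates a toy serial supply chain under decentralized versus centralized demand information. The echelon structure, order-up-to rule, and parameter values are illustrative assumptions by the editor, not the authors' agent-based model.

```python
import random
import statistics

def simulate_chain(periods=500, share_demand=False, seed=7):
    """Toy serial chain (retailer -> wholesaler -> manufacturer).

    Each echelon follows a simple order-up-to rule. With share_demand=True,
    upstream echelons see the end-customer demand directly (a crude stand-in
    for centralized information sharing); otherwise each echelon only sees
    the orders placed by its immediate downstream partner.
    """
    rng = random.Random(seed)
    echelons = ["retailer", "wholesaler", "manufacturer"]
    inventory = {e: 60 for e in echelons}
    order_hist = {e: [] for e in echelons}
    customer_hist = []

    for _ in range(periods):
        customer_demand = max(0, round(rng.gauss(20, 4)))
        customer_hist.append(customer_demand)
        downstream_order = customer_demand
        for e in echelons:
            signal = customer_demand if share_demand else downstream_order
            inventory[e] -= downstream_order        # ship to the downstream stage
            target = 3 * signal                     # illustrative order-up-to level
            order = max(0, target - inventory[e])
            order_hist[e].append(order)
            inventory[e] += order                   # assume ample upstream supply
            downstream_order = order                # becomes the next stage's demand

    base_var = statistics.pvariance(customer_hist)
    # Bullwhip ratio: order variance relative to end-customer demand variance.
    return {e: statistics.pvariance(order_hist[e]) / base_var for e in echelons}

if __name__ == "__main__":
    print("decentralized:", simulate_chain(share_demand=False))
    print("centralized  :", simulate_chain(share_demand=True))
```

Running the sketch typically shows the order-variance ratio growing from retailer to manufacturer, with smaller ratios in the centralized case, which is the qualitative effect the abstract studies.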
Logistics, Supply Chains, Transportation Urban and Local Transport Chair: Marvin Auf der Landwehr (Hochschule Hannover) A Simulation-Optimization Model for Automated Parcel Lockers Network Design in Urban Scenarios in Pamplona (Spain), Zakopane, and Krakow (Poland) Bartosz Sawik (Public University of Navarre; AGH University, Krakow, Poland); Adrian Serrano-Hernandez (Public University of Navarre, Institute of Smart Cities); Aitor Ballano (Public University of Navarre); and Javier Faulin (Public University of Navarre, Institute of Smart Cities) Abstract Abstract The constant rise of e-commerce, coupled with extremely fast deliveries, is a significant contributor to the saturation of city-center mobility. In this respect, the development of a convenient Automated Parcel Lockers (APLs) network improves last-mile distribution by reducing the number of transportation vehicles, the distances driven, and the delivery stops. Thus, this article aims to define and compare APL networks in the cities of Pamplona (Spain), Zakopane and Krakow (Poland). Thereby, a bi-criteria weighted-sum simulation optimization model is developed for a representative year in the aforementioned cities. The simulation forecasts the e-commerce demands, whereas the optimal APL network is obtained with the bi-criteria objective of maximizing APL revenues and minimizing network costs. Meaningful results are obtained from the multi-criteria hybrid model outcomes as well as from the comparison among the cities. These outcomes propose efficient APL networks, considering cultural and demographic factors, for a massive use of APLs in high-demand periods. Combining Survival Analysis and Simheuristics to Predict the Risk of Delays in Urban Ridesharing Operations with Random Travel Times Rocio de la Torre (Universitat Politècnica de València), Javier Panadero (Open University of Catalonia), Elena Perez-Bernabeu (Universitat Politècnica de València), Patricia Carracedo (Universitat), Erika M. Herrera (Universitat Oberta de Catalunya), and Angel A. Juan (Universitat Politècnica de València) Abstract Abstract More sustainable transportation and mobility concepts, such as ridesharing, are gaining momentum in modern smart cities. In many real-life scenarios, travel times among potential customers’ locations should be modeled as random variables. This uncertainty makes it difficult to design efficient ridesharing schedules and routing plans, since the risk of possible delays has to be considered as well. In this paper, we model ridesharing as a stochastic team orienteering problem in which the trade-off between maximizing the expected reward and the risk of incurring time delays is analyzed. In order to do so, we propose a simulation-optimization approach that combines a simheuristic algorithm with survival analysis techniques. The aforementioned methodology allows us to estimate not only the probability that a given routing plan will suffer a delay, but also the probability that the routing plan experiences delays of different sizes. Simulation Platform for Testing and Evaluation of CAV Trajectory Optimization and Signal Control Algorithm Integrated with Commercial Traffic Simulator Luan Carvalho, Agustín Guerra, Xiaohan Wang, Pruthvi Manjunatha, and Lily Elefteriadou (University of Florida) Abstract Abstract Connected and Autonomous Vehicles (CAVs) and their applications have the potential to increase the safety and efficiency of future traffic systems.
Therefore, it is essential to test and evaluate these systems to ensure gains in mobility and safety before they are widely used. However, there are limitations in testing these systems in the real world due to the limited number of CAVs and safety concerns. This paper develops a simulation platform which uses VISSIM to test a previously developed CAV application for signalized intersections. The simulation platform delegates the control of conventional vehicles to VISSIM and leverages its performance evaluation capabilities. Simultaneously, the external CAV application jointly optimizes signal control and CAV trajectories, creating a mixed traffic flow environment. We demonstrate how our simulation platform can integrate other CAV applications with VISSIM, allowing us to compare different trajectory and/or signal control optimization algorithms using VISSIM as a benchmark. Track Coordinator - Manufacturing Applications: Christoph Laroque (University of Applied Sciences Zwickau), Guodong Shao (National Institute of Standards and Technology) Manufacturing Applications Digital Twins in Manufacturing Chair: Guodong Shao (National Institute of Standards and Technology) A Laboratory Demonstrator of Simulation-based Digital Twins for Smart Manufacturing Giovanni Lugaresi, Francesco Verucchi, Edoardo Passarin, Sofia Gangemi, Giulia Gazzoni, and Andrea Matta (Politecnico di Milano) Abstract Abstract Under the fourth industrial revolution, several technologies have emerged and contributed to a substantial advancement of automation and data exchange in the manufacturing field. The digitization of processes within this demanding context has sparked an interest towards the research on Digital Twin (DT). Within a production planning and control scope, the digital counterpart of a manufacturing system can be a Discrete Event Simulation (DES) model, given the capabilities of this technology in analysing complex and dynamic systems. A DT has to be able to represent in real-time the complete behaviour of the physical system and to adapt in case of disruptions and changes in the shop-floor configuration. This work presents a case study that serves as proof-of-concept of the operational phase of a DT. A lab-scale manufacturing system is used as physical twin and two main components of the DT software architecture are developed: (1) synchronization and (2) validation. Applying a Hybrid Model to Solve the Job-shop Scheduling Problem with Preventive Maintenance, Sequence-Dependent Setup Times And Unknown Processing Times Joep J. Ooms and Alexander Hübl (University of Groningen) Abstract Abstract Although many researchers propose to optimize the job-shop scheduling problem using all processing times initially available, in this paper, to mimic a more realistic environment, processing times are unknown at the beginning of the optimization. The job-shop scheduling problem is considered with sequence-dependent setup times and preventive maintenance constraints. Processing times are revealed when products arrive at a machine. Unknown processing times will give a more real-world representation, where exact processing times aren’t available. A hybrid model, combining discrete event simulation and optimization, is applied to simulate the production process and to solve the job-shop problem. The hybrid model uses optimization by creating new production schedules when a job is processed, and when the product arrives at the next machine.
The meta-heuristics of the genetic algorithm, ant colony optimization, and simulated annealing algorithm are used. The results showed that the hybrid optimization performs significantly better than random sequencing of jobs. Achieving Sustainable Manufacturing by Embedding Sustainability KPIs in Digital Twins Clarissa Alejandra Gonzalez Chavez, Maja Bärring, Marcus Frantzén, Arpita Annepavar, Danush Gopalakrishnan, and Björn Johansson (Chalmers University of Technology) Abstract Abstract The manufacturing industry requires highly flexible and dynamic production lines that shift from conventional mass production to cover the requirements and fulfill demands. Customized production may reduce production waste but has not been studied to a wide extent. The advancement of digital technologies, e.g., Digital Twins, enables factories to collect real-time data. Also, they can enable remote monitoring of the production processes by establishing bi-directional flows of data between the physical and virtual spaces. This study focuses on the potential of digital manufacturing to improve sustainability in production systems by making use of Digital Twins. This research work performs a literature review, identifies suitable KPIs for a DES model, and evaluates the impact in a drone factory in four scenarios that test final assembly processes. The findings of this work can pose a first step toward the future development of a digital twin. Manufacturing Applications Machine Learning Chair: Giovanni Lugaresi (Politecnico di Milano) Application of Simulation based Reinforcement Learning for Optimizing Lot Dispatching Rules of Semiconductor Fab Chihyun Jung, Younghwan Kim, Hyeyun Kang, and You-In Choung (SK Hynix) Abstract Abstract An automatic lot dispatching system performs a critical role in a semiconductor FAB. It determines the order of jobs to process for individual equipment based on predefined rules. The dispatching rule should be aligned with the global objectives such as WIP move targets, minimizing cycle time, and maximizing total throughput. In this study, we examine a reinforcement learning (RL) algorithm to optimize dispatching rules, which should be aligned with the global objectives of the FAB. A discrete event simulation model is developed as a learning environment for RL. The proposed methodology has been examined in a real FAB. Promising results are presented and the limitations of the approach are discussed. Discrete-Event Simulation and Machine Learning for Prototype Composites Manufacture Lead Time Predictions Jamie Karl Smith and Calum Dickinson (AMRC, University of Sheffield) Abstract Abstract The article looks to generate synthetic data for machine learning algorithms using discrete-event simulation (DES). The case study used for the DES model was the Composite Centre at the AMRC, where prototype composite products are manufactured. The machine learning algorithm was used to predict the lead times of composite products based on the current state of the system. The machine learning algorithm is able to calculate the lead times much faster than a simulation model and does not require the expertise of a simulation engineer to execute. Three different types of composite materials and their manufacturing processes were initially modelled: dry fiber, prepreg, and thermoplastic. The accuracies of three machine learning algorithms were compared. The algorithms chosen were: Artificial Neural Network (ANN), Recurrent Neural Network (RNN) and linear regression.
It was found that the RNN provided the most accurate predictions and the linear regression algorithm was the worst-performing algorithm. Using Data Farming and Machine Learning to Reduce Response Time for the User Falk Stefan Pappert and Oliver Rose (Universität der Bundeswehr München) Abstract Abstract Simulation in our area usually takes some time; even if a preexisting model just needs to be parameterized, there is still the run time, which will usually take at least a few minutes if not hours. In our current case, a planner wanted to know, for a given product mix situation and an equipment group with specific characteristics, how much the equipment can be utilized without violating flow factor targets. Since the user is usually asking the same question just with different parameters, we are able to solve the waiting time problem while still giving good decision support. Instead of simulating every scenario at the time the user actually needs these answers, we use data farming to generate a large set of data points that are then used to train a neural network. This neural network substitutes for the simulation and responds to the user immediately. Manufacturing Applications Scheduling and Sequencing Chair: Thomas Felberbauer (St. Pölten University of Applied Sciences) Multi-Agent System Model For Dynamic Scheduling In Flexible Job Shops Subject To Random Machine Breakdown Akposeiyifa Joseph Ebufegha and Simon Li (University of Calgary) Abstract Abstract This paper presents a model for dynamic scheduling in a smart manufacturing system that can be used in a manufacturing environment subject to random machine breakdown. We employ a multi-agent system (MAS) to schedule work on a system of machines in real-time. We propose that such a system should be less sensitive to unforeseen disruptions to the system whilst yielding good results with respect to the total flowtime for parts requested of the system. The approach employed is a completely reactive approach, and as such has the benefit of not requiring the determination of a nominal schedule. Rather, we take advantage of the self-organizing nature of the MAS to guide work scheduling. To evaluate the efficacy of our proposed model, we compare its performance to that of a system using predictive-reactive scheduling to solve a furniture manufacturing problem. Real-time Scheduling Based on Simulation and Deep Reinforcement Learning with Featured Action Space Shufang Xie, Tao Zhang, and Oliver Rose (Universität der Bundeswehr München) Abstract Abstract In this study, real-time scheduling is narrowed to the selection of one job to be processed from the queue of a machine when the machine becomes idle. It is considered to be one kind of sequential decision-making. Deep reinforcement learning with simulation has been widely used to make such decisions for most environments where the action space is either continuous or discrete but limited in size. However, in the real-time scheduling environment, the number of actions is the number of jobs in the queue, which varies over time. Moreover, if jobs arrive randomly, it is impossible to fix the actions. The action space is dynamic and stochastic. To overcome the difficulties raised by this, the action space is transformed into a featured action space. Actions are distinguished by their features. To apply the featured action space, three innovative structures of neural networks are proposed and compared with each other.
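To illustrate the featured action space idea from the preceding abstract, the following minimal Python/NumPy sketch scores each queued job's feature vector with one shared network, so the number of selectable actions can vary between decisions. The feature dimension, the random network weights, and the epsilon-greedy policy are illustrative placeholders chosen by the editor, not the authors' trained architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "featured action space" scorer: instead of one network output per action,
# a single shared network maps each job's feature vector to a scalar Q-value,
# so the number of jobs in the queue may differ from decision to decision.
# The weights below are illustrative placeholders; in the paper's setting they
# would be trained with deep Q-learning against the simulation environment.
W1 = rng.normal(size=(8, 16))   # 8 assumed job/state features -> 16 hidden units
b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1))
b2 = np.zeros(1)

def q_values(job_features: np.ndarray) -> np.ndarray:
    """job_features: (n_jobs, 8) array; returns one Q-value per queued job."""
    hidden = np.maximum(0.0, job_features @ W1 + b1)   # ReLU layer
    return (hidden @ W2 + b2).ravel()

def select_job(job_features: np.ndarray, epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice over however many jobs are currently queued."""
    if rng.random() < epsilon:
        return int(rng.integers(len(job_features)))
    return int(np.argmax(q_values(job_features)))

# Example: the queue length differs between the two decision points.
queue_a = rng.normal(size=(3, 8))   # 3 jobs waiting
queue_b = rng.normal(size=(7, 8))   # 7 jobs waiting
print(select_job(queue_a), select_job(queue_b))
```

The key design point is that the action set is represented by job features rather than by a fixed output dimension, which is what allows the same scorer to handle a dynamic, stochastic queue.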
Sequence Scrambling in Aggregated Mixed-Model Production Line Modeling Sebastian Kroeger, Svenja Korder, Robin Schneider, and Michael F. Zaeh (Technical University of Munich, Institute for Machine Tools and Industrial Management) Abstract Abstract Global competition and volatile customer demands for individual products lead to shortened product life and innovation cycles and rising numbers of product variants. By manufacturing on mixed-model lines (MML), companies can meet these challenges and produce many variants on one line. To run MML, more frequent planning is required. During early-stage planning, production system constraints, such as buffers and internal supply relationships, must be dimensioned. Therefore, methods like discrete-event simulation can be applied. However, line-modeling parameters are unknown in early-stage planning. Consequently, detailed simulation models cannot be built. For this reason, aggregated modeling approaches are applied. However, existing aggregated modeling approaches do not consider MML. Therefore, this paper advances the existing modeling approaches to consider MML and integrates inline variant scrambling effects. This enables the dimensioning of decoupling buffers, as they are necessary to rebuild production sequences. The proposed approach was implemented and applied using an exemplary case study. Manufacturing Applications Scheduling Chair: Christoph Laroque (University of Applied Sciences Zwickau) A Framework For Rescheduling a Fixed-Layout Assembly System Using Discrete-Event Simulation Harold Billiet and Rainer Stark (Technische Universität Berlin) Abstract Abstract This paper presents a framework for the rescheduling of Fixed-Layout Assembly (FLA) systems. It relies on the automatic generation of a Discrete-Event Simulation (DES) model. FLA systems are used for the assembly of large and bulky products in small volume. Because of their characteristics, they are subject to many disturbances and plan deviations throughout the execution stage. Planners currently use their experience to forecast the impact of these changes on the whole assembly system and to imagine new scheduling scenarios. Conventional DES methods are adapted to inflexible and automated manufacturing systems and are often not used for FLA systems. Planners would strongly benefit from a simple and effective solution to quickly forecast the impact of errors and new scheduling scenarios on the whole assembly system. The framework presented in this paper addresses this problem. Multi-Disciplinary Simulation-based Digital Twins for Manufacturing Systems Emile Glorieux (The Manufacturing Technology Centre) and Joe Young (The Manufacturing Technology Centre) Abstract Abstract In recent years, digital twin solutions have been introduced into manufacturing using various engineering software, informatics and sensing technologies. These allow the capture and transfer of a near-continuous stream of data from the physical production systems and processes to virtual environments, where modelling and simulation can predict future system behaviour. These predictions are then used for decision making and control. Most commonly, digital twin solutions use data-driven models to predict the future behaviour of the system or process based on the sensor data. This presentation proposes how “first principles simulation” can be used for digital twins in manufacturing. Simulation-based digital twins remove the need for onerous historical datasets that are required to train data-driven models.
Different techniques for using multi-disciplinary simulations with these digital twins are proposed. Three simulation-based digital twins are presented, demonstrating three commercial case studies related to pharmaceutics, fast-moving consumer goods, and aerospace (additive) manufacturing. Optimal Team Formation and Job Assignment to Optimize Warehouse Operations Avnish Kishor Malde and Tugce Isik (Clemson University) Abstract Abstract We study optimal container unloading and warehouse replenishment at a manufacturing plant. The warehouse inventory is depleted as parts are consumed in the manufacturing process. The inventory level is replenished using the parts stored in sea-containers and trailers available at the plant. Our goal is to determine the optimal team formation, buffer allocation, and job assignment for teams of workers. The teams work at two tandem stations, each with several finite-buffered parallel queues. The objective is to minimize the total time required to process all containers and trailers. There are operational constraints on the number of teams that can be formed, the number of servers in each team, and the number of buffer spaces allotted to each team. We use a simulation-based optimization approach, and the results consistently show a reduction in the processing time. Further, we also observe that the optimal team formation is dependent on the total workload. Manufacturing Applications Optimization Chair: Marina Meireles Pereira Mafia (University of Southern Denmark) Optimization Of The Design Of Modular Production Systems Soeren Bergmann (TU Ilmenau) Abstract Abstract The desire for more flexibility in manufacturing systems, especially when different products or many product variants are manufactured in one production system, is leading to a move away from the manufacturing principle of classic line production to more flexible and workshop-oriented production systems, particularly in the automotive industry. One of the challenges in these so-called modular assembly or production systems is the system design, especially the allocation of activities to the individual production cells. One approach to improve this allocation is offered by simulation-based optimization. In this paper, a concept for simulation-based optimization of the design of modular production systems is presented and demonstrated by means of a small academic case study. Classical genetic algorithms and additionally the NSGA-II algorithm, which also allows multi-objective optimization, are used. Enabling Knowledge Discovery from Simulation-based Multi-objective Optimization in Reconfigurable Manufacturing Systems Carlos Alberto Barrera Diaz, Henrik Smedberg, Sunith Bandaru, and Amos H.C. Ng (University of Skövde) Abstract Abstract Due to the nature of today's manufacturing industry, where enterprises are subjected to frequent changes and volatile markets, reconfigurable manufacturing systems (RMS) are crucial when addressing ramp-up and ramp-down scenarios derived from, among other challenges, increasingly shortened product lifecycles. Applying simulation-based optimization techniques to their designs under different production volume scenarios has become valuable as RMS become more complex.
Beyond proposing optimal solutions subject to various production volume changes, decision-makers can extract propositional knowledge to better understand the RMS design and support their decision-making through a knowledge discovery method that combines simulation-based optimization and data mining techniques. In particular, this study applies a novel flexible pattern mining algorithm to conduct post-optimality analysis on multi-dimensional, multi-objective optimization datasets from an industrial-inspired application to discover rules regarding how task assignments to the workstations constitute reasonable solutions for scalable RMS. Workload Control in High-Mix-Low-Volume Factories Through the Use of a Multi-Agent System Jeroen Didden, Quang-Vinh Dang, and Ivo Adan (Eindhoven University of Technology) Abstract Abstract Order release in High-Mix-Low-Volume machine environments is often difficult to control due to the high variety of these shops. This paper, therefore, proposes an extension to a Multi-Agent System to control order release. Intelligence is introduced to the agent that is responsible for order release to autonomously learn which jobs to release into the shop through the use of sequencing rules, depending on the current environment. The objective is to minimize the mean weighted tardiness of all jobs. Computational results show that the proposed sequencing rules outperform other more common dispatching rules in terms of mean weighted tardiness. Further analysis of the results also reveals that a more accurate prediction of the lead time of jobs can be made, which is one of the main interests of practitioners in High-Mix-Low-Volume environments. Manufacturing Applications Simulation Approaches Chair: Klaus Altendorfer (Upper Austrian University of Applied Science) A Tool-based Approach to Assess Simulation Worthiness and Specify Sponsor Needs for SMEs Ana Luiza Bicalho-Hoch, Felix Özkul, Nicolas Wittine, and Sigrid Wenzel (University of Kassel) Abstract Abstract Many small and medium-sized enterprises (SMEs) still refrain from using discrete-event simulation (DES) to plan, implement and operate their manufacturing systems. Given the increasing relevance of DES - e.g., as the basis for digital twins - a need for action is therefore identified. While the reasons for the seeming aversion to DES in the context of SMEs are well-researched, the following work's main objective is to present a tool-based approach that aims at supporting SMEs in overcoming the hurdles of the early stages of a structured simulation study. This is done by assisting users in identifying simulation-worthy issues as well as in approaching the possible tendering and development of accurate problem specifications. Furthermore, the research methodology and the results of the commenced Delphi study are outlined. A critical reflection and an outlook on the topic are provided as well. Carbon Policies in Network Distribution: A Simulation Approach for Sustainable Supply Chains Marina Meireles Pereira Mafia, Elias Ribeiro da Silva, and Arne Bilberg (University of Southern Denmark) Abstract Abstract New carbon policies are being introduced by many countries due to stricter sustainability targets. However, the existing research lacks investigation on how they influence supply chain network design.
In this paper, we investigate how different network policies would influence the supply chain carbon footprint and costs while analyzing how different strategies to minimize the total emissions potentially influence the network distribution operation. A simulation approach is used to investigate the impact of carbon policies in a retail company with omnichannel operations, and different strategies for carbon minimization along the supply chain are discussed. This research is expected to be useful to support companies with a new approach for a more data-driven decision-making process toward sustainable distribution networks. Automatic Component-based Synthesis of User-Configured Manufacturing Simulation Models Fadil Kallat (Technische Universität Dortmund); Jens Hetzler (ITK Engineering GmbH); Alexander Mages and Carina Mieth (TRUMPF Werkzeugmaschinen SE & Co. KG); and Jakob Rehof, Christian Riest, and Tristan Schäfer (Technische Universität Dortmund) Abstract Abstract Using simulation models for manufacturing facilities is a common approach for planning, optimizing, and testing different machine configurations and positioning before the actual construction. However, creating these models is time-consuming and costly. Consequently, only a few different simulation models are usually created based on best practices and experience, precluding any examination of the entire variety of possible solutions. To address these obstacles, we present a proof of concept to automate and hence reduce the cost of the process of simulation model creation, thereby allowing for the creation of a larger number of selectable solution variants. Based on a given master simulation model, which contains all possible variations of a shop floor, we defined simulation building blocks as components. We used component-based synthesis based on combinatory logic to synthesize a product line of varying simulation models for a given configuration, which can then be executed and evaluated to find suitable solutions. Manufacturing Applications Simulation Modeling Chair: Sumin Jeon (Siemens) Development of a data-driven Simulation Model for an Assembly-To-Order System Chin Soon Chong, Chin Sheng Tan, Peng Yu Tan, and Guan Leong Tnay (A*Star, SIMTech) and Yang Kuei Lin (Feng Chia University) Abstract Abstract This paper presents the development of a simulation model for an Assembly-To-Order (ATO) system. The model can be used to identify potential conflicts or bottlenecks in the system, to assist in the decision-making process, and to improve on the existing manual order planning process. The process flow in the ATO system is complex as the system needs to handle a variety of products, different process routings, high product mix, and shared resources. The simulation model is developed using an existing commercial software platform. The model is designed to be fully data-driven, with sets of input and output data schema tables. This approach makes the simulation modeling more accessible to the manufacturing community, and enables integration with the factory's Manufacturing Execution System (MES) for real-time data and a Real Time Dashboard (RTD) for displaying multiple scenarios of simulation results. The integration is accomplished through a central database server via Application Programming Interface (API).
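The data-driven modeling idea in the preceding abstract can be illustrated with a minimal SimPy sketch that builds stations and routings from input tables rather than from hard-coded model logic. The table contents, station names, and the use of SimPy are assumptions for illustration only; the paper itself uses a commercial platform with database-backed schema tables.

```python
import simpy

# Hypothetical input schema: in the paper's setting such tables would come from
# the central database via the API; here they are inlined dictionaries.
STATIONS = {"kitting": {"capacity": 1}, "assembly": {"capacity": 2}, "test": {"capacity": 1}}
ROUTINGS = {"productA": [("kitting", 2.0), ("assembly", 5.0), ("test", 1.5)],
            "productB": [("kitting", 1.0), ("assembly", 3.0)]}
ORDERS = [("productA", 0.0), ("productB", 1.0), ("productA", 2.5)]  # (product, release time)

def order_process(env, name, routing, stations, log):
    """Send one order through its routing, seizing each shared station in turn."""
    for station_name, proc_time in routing:
        with stations[station_name].request() as req:
            yield req
            yield env.timeout(proc_time)
    log.append((name, env.now))

def build_and_run(until=50):
    """Construct the model purely from the input tables and run it."""
    env = simpy.Environment()
    stations = {n: simpy.Resource(env, capacity=s["capacity"]) for n, s in STATIONS.items()}
    log = []
    for i, (product, release) in enumerate(ORDERS):
        def release_order(env=env, product=product, release=release, i=i):
            yield env.timeout(release)                     # wait until the order is released
            yield env.process(order_process(env, f"{product}-{i}",
                                            ROUTINGS[product], stations, log))
        env.process(release_order())
    env.run(until=until)
    return log   # completion times per order, the kind of output a dashboard could display

if __name__ == "__main__":
    for name, finish in build_and_run():
        print(f"{name} finished at t={finish:.1f}")
```

Changing the schema tables (stations, routings, orders) reconfigures the model without touching the simulation code, which is the essential benefit of the fully data-driven design the abstract describes.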
Potential of Simulation Effort Reduction by Intelligent Simulation Budget Management for Multi-item and Multi-stage Production Systems Wolfgang Seiringer (University of Applied Sciences Upper Austria), Juliana Castaneda (Universitat Oberta de Catalunya -- IN3), Klaus Altendorfer (University of Applied Sciences Upper Austria), and Lisardo Gayán Tremps and Angel A. Juan (Universitat Oberta de Catalunya -- IN3) Abstract Abstract Simulation is utilized to analyze multi-item and multi-stage production systems that are driven by material requirements planning (MRP). The range of possible parameter combinations and solutions increases as the model of the production system includes more components. In simulation experiments with a fixed number of replications per iteration, it might be infeasible to test a large range of these MRP parameters. This paper discusses how to efficiently manage the simulation budget, to avoid wasting it in exploring non-promising solutions. Therefore, intelligent simulation budget management (SBM) is applied, where the results of the current iteration's replications are compared to the average results of the previous iterations. The current iteration is skipped if its average overall cost is outside the defined percentile boundaries. A simulation study using two multi-item and multi-stage MRP systems is performed to evaluate SBM potential. The results show that the SBM methodology leads to significantly reduced simulation efforts. Improvement of the Kanban System for an Automotive Company via Discrete Event Simulation: Otokar Okay Isik (T.C. Istanbul Kultur University, Otokar) Abstract Abstract This study aims to improve the available kanban system of an automotive company. Currently, a 2-bin kanban system is being used for nearly 700 frequently used supplies in the assembly of different types of vehicles. However, there are no well-defined procedures for sizing the kanbans or the warehouse stocks. Likewise, there is no policy for how often the availability of parts in kanban bins on the assembly line must be checked. Therefore, opportunities exist to reduce the inventory costs due to overstocking and to improve labor efficiency by correctly scheduling the mizusumashi. The current flow for 3 supply parts with different usage frequency, lead time, and critical and maximum stock levels has been modeled via discrete-event simulation and validated. Then, variable values that minimize the WIP without stockouts were sought. Arena and OptQuest were used in the modeling and optimization. Manufacturing Applications Simulation Applications Chair: Deogratias Kibira (National Institute of Standards and Technology, University of Maryland) Production Scheduling for Parallel Machines using Simulation Techniques: Case Study of Plastic Packaging Factory Jiratsaya Panasri, Nara Samattapapong, and Sathitthep Sangthong (Suranaree University of Technology) Abstract Abstract The purpose of this study is to reduce total uptime (Makespan). After examining and collecting data on the planning and sequencing operations of the plastic packaging factory, it was discovered that there was a problem in determining how to assign work to the machines due to a lack of systematic analysis of production scheduling. In addition, the sequence is based on the planner's experience in the production planning department, so scheduling systems and tools cannot tell if the current sequence is the best one.
The researchers then used simulation techniques to create all possible alternatives and identify the best solution for the sequencing process on the machine. The simulation results showed that sequencing using simulation techniques can create a total of 40 possible alternatives. In addition, the best total uptime found is 96,018.01 seconds, compared to 208,850 seconds, which is equivalent to an improvement of 54.03%. A Biased-Randomized Simheuristic for a Hybrid Flow Shop with Stochastic Processing Times in the Semiconductor Industry Majsa Ammouriova (Universitat Oberta de Catalunya), Madlene Leißau (University of Applied Sciences Zwickau), Javier Panadero (Universitat Oberta de Catalunya), Christin Schumacher (TU Dortmund University), Christoph Laroque (University of Applied Sciences Zwickau), and Angel A. Juan (Universitat Politècnica de València) Abstract Abstract Compared to other industries, production systems in semiconductor manufacturing have an above-average level of complexity. Developments in recent decades document increasing product diversity, smaller batch sizes, and a rapidly changing product range. At the same time, the interconnections between equipment groups increase due to rising automation, thus making production planning and control more difficult. This paper discusses a hybrid flow shop problem with realistic constraints, such as stochastic processing times and priority constraints. The primary goal of this paper is to find a solution set (permutation of jobs) that minimizes the production makespan. The proposed algorithm extends our previous work by combining biased-randomization techniques with a discrete-event simulation heuristic. This simulation-optimization approach allows us to efficiently model dependencies caused by batching and by the existence of different flow paths. As shown in a series of numerical experiments, our methodology can achieve promising results even when stochastic processing times are considered. Modeling And Simulation Of Fresh Meal Production Kean Dequeant (Gousto) and Stephane Dauzère-Pérès and Claude Yugma (Mines Saint-Étienne, Univ Clermont Auvergne) Abstract Abstract Gousto is one of the leaders in recipe box deliveries in the UK, and is continuously increasing the complexity of its factories to keep growing and to offer more and more choice to its customers. With the increased manufacturing complexity making the problems harder to solve, the design of a simulation model has been initiated on a first limited scope to help the development and testing of line optimisation algorithms. Track Coordinator - Maritime System: Xinhu Cao (National University of Singapore, Industrial Systems Engineering and Management), Xiuju Fu (Institute of High Performance Computing), Zhuo Sun (Dalian Maritime University) Maritime Systems Maritime Systems Panel Discussion Chair: Xinhu Cao (National University of Singapore, Industrial Systems Engineering and Management) Hein Thuan Loy (PSA International), Hai Gu (American Bureau of Shipping), Haobin Li (National University of Singapore), Xiuju Fu (Institute of High Performance Computing), Zhuo Sun (Dalian Maritime University), and Xinhu Cao (National University of Singapore) Abstract Abstract The maritime logistics industry plays a critical role in the global supply chain network since ports and ships handle over 80% of trade in volume.
Currently, this traditional industry is upgrading from labor-intensive to automated/autonomous operations due to the higher requirement for efficiency and the shortage of skilled human resources. This gives rise to growing demands for computer-aided decision-making. As an effective tool for computer-aided decision-making, simulation shows its merits in modeling real-life and non-existent systems. However, there are questions we need to discuss in depth to guarantee simulation results that are useful to industry. For example, why do we need simulation for the maritime industry? What is a good simulation model for the maritime industry? How should we position the role of simulation in maritime industry development compared to big data and AI technology development, and how may we promote more synergy between simulation and big data technologies? What is the role of simulation for decarbonization and digitalization? Maritime Systems Maritime Systems I Chair: Zhuo Sun (Dalian Maritime University) Simulation Case Study: How Arctic Shipping Shares the Flow of Cargo from Traditional Routes Zhuo Sun and Tao Zhu (Dalian Maritime University) Abstract Abstract The accelerated melting of the Arctic ice brings the opening of the Arctic shipping routes ever closer, which will also bring about a new pattern of shipping networks. To discuss the effect of the opening of the Arctic route on the cargo flow of traditional routes, this study forecasts the ice conditions for the opening of the Arctic route, develops a minimum-cost cargo flow assignment model that incorporates the Arctic route into the world container shipping network, and outputs intuitive results through simulation. A Collision-free Simulation Framework For ASCs In Automated Container Terminals Zhuo Sun and Ziyang Qi (Dalian Maritime University) Abstract Abstract To address the collision avoidance problem of automated stacking cranes (ASCs) in the automated container terminal, this paper provides a collision-free simulation framework for ASCs. The framework is constructed based on spatio-temporal information and enables the effective collision avoidance of multiple ASCs even when different path planning algorithms considering only obstacle avoidance are embedded into the framework. Finally, the effectiveness of the collision-free simulation framework is demonstrated by embedding the A* algorithm and Ant Colony algorithm into the framework. System-level Simulation of Maritime Traffic in Northern Baltic Sea Ketki Kulkarni, Fang Li, Cong Liu, Mashrura Musharraf, and Pentti Kujala (Aalto University) Abstract Abstract Maritime traffic in winter in the Baltic Sea (particularly the northern part) is challenged by heavy ice formation. Icebreaker ships that can provide assistance are a limited resource that needs to be shared among all ships. Decision-making for winter navigation systems thus involves monitoring several parameters at both the operational and system levels, including multiple stochastic parameters. This work presents an integration of ice characteristics, operational level details of ships, and system level details such as traffic flows and icebreaker scheduling through a simulation framework. At the core is a discrete-event simulation model that mimics winter traffic flows in varying ice conditions obtained through meteorological data.
This work introduces the novel combination of using ship-level research (operability of individual ships in different ice conditions) as an input for deriving vessel speeds when modelling traffic flows for system-level optimization. Maritime Systems Maritime Systems II Chair: Xinhu Cao (National University of Singapore, Industrial Systems Engineering and Management) Optimization of Hub-and-Spoke Maritime Network Considering Hub Port Failure Zhuo Sun, Yiwen Su, Kaili Liu, and Ran Zhang (Dalian Maritime University) Abstract Abstract Any abnormal situation at a hub port may bring adverse effects, such as wasted time and increased cost, to the shipping company. This paper solves the hub-and-spoke shipping network transportation optimization problem in the case of partial failure and complete failure of hub ports. Partial failure means that the transshipment demand of the hub port exceeds its capacity, resulting in congestion at the hub port. Complete failure means that the hub port cannot continue to provide services for some reason. The purpose of this paper is to optimize the hub-and-spoke shipping network system to minimize the cost loss of shipping companies in the event of hub port failure. This is a mixed-integer nonlinear programming problem. The simulation example proves that the model in this paper can effectively reduce the loss of cargo flow due to the failure of the hub port and significantly improve the reliability of the shipping network. Simulation-Optimization Approach for Integrated Scheduling at Wharf Apron in Container Terminals Mengyu Zhu, Chenhao Zhou, and Ada Che (Northwestern Polytechnical University) Abstract Abstract As the bridge between land and sea transport, container terminals play a significant role in global trading activities. In order to improve the operational efficiency at the wharf apron, this paper introduced an integrated scheduling problem with the consideration of vehicle dispatching and routing, and developed a time-space network based on takt time. Given the highly uncertain nature of vehicle movement and equipment handshakes, a simulation-optimization approach was developed, which integrates an improved particle swarm optimization and a discrete event simulation model. To further reduce simulation run time, parallel computing was adopted in the algorithm. Numerical experiments show that the proposed algorithm outperforms the genetic algorithm-based and strategy-based simulation-optimization approaches. Combination of Simulated Annealing Algorithm and Minimum Horizontal Line Algorithm to Solve Two-Dimensional Pallet Loading Problem Yuchuan Hu (Dalian Maritime University, Comprehensive Collaborative Innovation Center); Yi Zuo (Dalian Maritime University, Collaborative Innovation Center of Maritime Big Data & Artificial General Intelligence); and Zhuo Sun (Dalian Maritime University) Abstract Abstract Major logistics companies have been paying considerable attention to maximizing pallet usage efficiency. The two-dimensional pallet loading problem can be transformed into the orthogonal arrangement of two-dimensional irregular rectangular pieces. This paper mainly studies a hybrid heuristic algorithm combining the simulated annealing (SA) algorithm and the minimum horizontal line (MHL) algorithm to solve the two-dimensional pallet loading problem.
Since the simulated annealing algorithm has relatively strong local optimization ability and the minimum horizontal line algorithm has relatively strong global search ability, the two algorithms are combined to solve the two-dimensional pallet loading problem. Experiments show that the hybrid heuristic algorithm proposed in this paper is nearly 17 times faster than the integer programming model in terms of convergence time, and it also improves on the genetic algorithm in terms of convergence time and space utilization. Maritime Systems Maritime Systems III Chair: Xiuju Fu (Institute of High Performance Computing) Feeder Ship Routing Problem with Tidal Time Windows Yuan Gao (Dalian Maritime University) and Zhuo Sun (Dalian) Abstract Abstract In a shipping network, feeder ships are needed to fulfill the demand for cargo transportation between a hub port and its feeder ports. The objective of the Feeder Ship Routing Problem (FSRP) is to minimize the transportation cost, while the feasible time of entering and leaving a port is affected by tide and load, due to the limitation of the waterway depth. Different from the classic VRPTW (Vehicle Routing Problem with Time Windows), the tidal time windows in this study change with the route of the ship, which makes the problem challenging to solve. This paper studies an FSRP with nonlinear time windows, which is solved by Column Generation after a model simplification by Dantzig-Wolfe Decomposition. Numerical experiments and sensitivity analysis show that the algorithm is effective and that considering tidal influence can effectively reduce the operating cost of the fleet. Yard Template Planning in a Transshipment Hub: Gaussian Process Regression Bonggwon Kang (Pusan National University), Permata Vallentino Eko Joatiko (PT. Bank Rakyat Indonesia Tbk), and Jungtae Park and Soondo Hong (Pusan National University) Abstract Abstract A yard template in a container terminal assigns subblocks to containers with the same departing vessel to reduce vessel turnaround time through a decreased number of rehandlings. Because vehicle congestion can significantly affect the vessel turnaround time, a terminal operator carefully determines the yard template considering the complex traffic congestion on the entire container terminal. In this study, we propose an application of a Gaussian Process (GP) to predict the vessel turnaround time under the impacts of vehicle interruption and blocking. Based on the predictions, we determine the yard template with the shortest predicted turnaround time among candidate yard templates. Through simulation experiments, we compare the proposed approach and a baseline model based on Mixed Integer Programming (MIP). The simulation results show that the application reduces the vessel turnaround time by 6.66% compared with the baseline model.
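To make the Gaussian-process idea in the preceding abstract concrete, the following minimal sketch (our own illustration, not the authors' implementation; the feature encoding, data, and kernel choice are placeholders) fits a GP regressor to hypothetical pairs of yard-template features and simulated turnaround times, then selects the candidate template with the shortest predicted turnaround time.

# Minimal illustrative sketch (not the authors' code): a Gaussian Process regressor
# maps hypothetical yard-template features to simulated vessel turnaround times, and
# the candidate with the shortest predicted turnaround time is selected.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

# Placeholder training data: each row encodes a candidate yard template
# (e.g., subblock spread, expected vehicle congestion); y is the simulated
# turnaround time in hours from a terminal simulation run.
X_train = np.array([[4, 0.30], [6, 0.45], [8, 0.60], [5, 0.35], [7, 0.55]])
y_train = np.array([21.5, 23.1, 26.0, 22.0, 24.7])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

# Candidate templates to compare; choose the one with the smallest prediction.
candidates = np.array([[5, 0.40], [6, 0.50], [4, 0.25]])
pred, std = gp.predict(candidates, return_std=True)
best = candidates[np.argmin(pred)]
print("predicted turnaround times:", pred, "chosen template:", best)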
Track Coordinator - MASM: Semiconductor Manufacturing: John Fowler (Arizona State University), Lars Moench (University of Hagen), Kan Wu (Chang Gung University) MASM MASM Panel: Industry-Academic Collaborations in Semiconductor Manufacturing Chair: John Fowler (Arizona State University) Industry-Academic Collaborations in Semiconductor Manufacturing Chen-Fu Chien (National Tsing Hua University), Stéphane Dauzère-Pérès (École des Mines de Saint-Étienne), Hans Ehm (Infineon Technologies AG), Lars Mönch (Fernuniversität Hagen), Adeline Tay (Micron), Reha Uzsoy (North Carolina State University), Kan Wu (Chang Gung University), and John Fowler (Arizona State University) Abstract Abstract This MASM panel session brings together industry leaders and academic researchers from Asia, Europe, and the United States to discuss the important role of industry-academy collaborations in modeling and analysis of semiconductor manufacturing. Participants from industry and academia will reflect on what worked and what did not work in past collaborative efforts, discuss current collaborations, and recommend new research issues to jointly investigate. Fruitful exchanges with the audience are expected. MASM Artificial Intelligence Applications Chair: Keyhoon Ko (VMS Global, Inc.) Maximizing Throughput, Due Date Compliance and Other Partially Conflicting Objectives Using Multifactorial AI-powered Optimization Holger Brandl, Philipp Rossbach, and Hajo Terbrack (SYSTEMA GmbH) and Tobias Sprogies (NEXPERIA Germany GmbH) Abstract Abstract Semiconductor production is a highly complex interplay of human, machine, material, and method. With state-of-the-art meta-heuristics as the baseline for WIP scheduling, Nexperia Hamburg is exploring more advanced methods to optimize production. By combining simulation with an event-driven dispatcher, a constraint solver, and a digital twin for metrics and monitoring, we have created a powerful testbed to evaluate production planning methods. The integrated digital twin of the selected bottleneck production area provides not only data to drive the planning process but also enables transparency and predictive analytics to allow revealing further unused production potential. In particular, we compare various meta-heuristics, combinatorial optimization, and reinforcement learning. We study how to balance – sometimes conflicting – optimization objectives in an epitaxy production area in a high-throughput wafer fab. To enable line engineers, we propose a robust and transparent scheduling method selection and verification process. Machine Learning Powered Capacity Planning for Semiconductor Fab Keyhoon Ko and Seokcheol Chang (VMS Global, Inc.) and Won-Jun Lee and Byung-Hee Kim (VMS Solutions Co., Ltd) Abstract Abstract Semiconductor wafers are manufactured by stacking hundreds of layers engraved with circuit patterns. Wafer fabrication process with the characteristic of re-entrant flow is a complex job-shop that consists of several work areas such as lithography, etch, and diffusion. Each work area has several workstations with one or more machines that execute the same operation. Capacity planning for a wafer fab is difficult; one must determine the required machine count to meet demands on time. This study proposes a methodology to find the optimal machine count for each workstation using an approach that combines optimization, simulation, and machine learning techniques. The experimental example demonstrates that this approach can systematically provide a good and practical solution. 
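As background for the capacity-planning problem described in the preceding abstract, a generic first-cut static estimate of the machine count needed at a workstation can be written as follows; this is an illustrative calculation only, not the paper's combined optimization-simulation-machine-learning methodology, and all numbers are placeholders.

# Generic static capacity estimate (illustrative only, not the paper's method):
# required machines = workload hours / (available hours * target utilization), rounded up.
import math

def required_machines(demand_per_week, hours_per_unit, available_hours=168.0,
                      target_utilization=0.85):
    workload = demand_per_week * hours_per_unit                  # total processing hours needed
    effective_capacity = available_hours * target_utilization    # usable hours per machine
    return math.ceil(workload / effective_capacity)

# Example: 1,200 wafer moves per week at 0.25 h each on a hypothetical workstation.
print(required_machines(demand_per_week=1200, hours_per_unit=0.25))  # -> 3

In the methodology sketched by the authors, such a static estimate would only be a starting point that is then refined with simulation and learned models of fab dynamics.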
Deep Reinforcement Learning for Queue-time Management in Semiconductor Manufacturing Harel Yedidsion, Prafulla Dawadi, David Norman, and Emrah Zarifoglu (Applied Materials) Abstract Abstract Queue-time constraints (QTC) define a limit on the time that a lot can wait between two process steps in its flow. In semiconductor manufacturing, lots that exceed this time limit experience yield loss, need rework, or get scrapped. QTCs are difficult to schedule, since a lot needs to wait to be released to the first process step until there is available capacity to process the final step. However, exactly calculating if there is enough capacity is computationally expensive. In this work, we propose a deep Reinforcement Learning (RL) method to manage the release of lots into the queue-time-constrained segment. We analyze the performance of our RL method and compare it to seven baseline solutions. Our empirical evaluation shows that the RL method outperforms the baselines in five performance metrics, including the number of queue-time violations and makespan, while requiring negligible online compute time. MASM Fab Scheduling Chair: Dennis Xenos (Flexciton Limited) Imitation Learning for Real-Time Job Shop Scheduling Using Graph-Based Representation Je-Hun Lee and Hyun-Jung Kim (KAIST) Abstract Abstract Scheduling of manufacturing systems in practice is challenging due to dynamic production environments, such as random job arrivals and machine breakdowns. Dispatching rules are often used because they can be easily applied even in such dynamic manufacturing environments. However, dispatching rules often fail to provide a satisfactory production schedule because they cannot consider overall system states when assigning jobs. Therefore, we develop a real-time scheduling method using imitation learning, especially behavior cloning, to solve job shop scheduling problems. We define a set of available actions, a target optimal policy, and a dynamic graph-based state representation method for imitation learning. The proposed method is size-agnostic and can therefore be applied to unseen, larger problems. The experimental results show that the proposed method performs better at minimizing makespan than other dispatching rules in dynamic job shops. Fab-Wide Scheduling of Semiconductor Plants: A Large-Scale Industrial Deployment Case Study Ioannis Konstantelos, Johannes Wiebe, Robert Moss, and Sebastian Steele (Flexciton); Tina O'Donnell and Sharon Feely (Seagate Technology); and Dennis Xenos (Flexciton) Abstract Abstract This paper presents a novel fab-wide scheduling approach for large semiconductor manufacturing plants. The challenge lies in the scale and complexity of the problem at hand; thousands of wafers must be scheduled on hundreds of machines across several steps while respecting a wide array of operational constraints such as the use of photolithography reticles, timelinks and flow control limits. We begin by surveying the state of the art and presenting some key opportunities that arise in the context of global scheduling. The proposed approach is presented, highlighting its hierarchical structure and how it can interface with local toolset schedulers. We present some illustrative examples and aggregate statistics obtained during ongoing trials at Seagate Springtown. We demonstrate that the proposed approach can result in substantial cycle time improvements when compared to myopic dispatch methods and a marked reduction in the need for manual intervention for controlling flows.
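The queue-time management problem discussed above can be illustrated with a deliberately simple release gate; this is a hypothetical heuristic of our own for illustration, not the paper's RL method nor one of its reported baselines. A lot enters the time-constrained segment only if a rough estimate of the elapsed time until it can start the final step stays within the limit.

# Hypothetical queue-time release gate (illustrative only): release a lot into a
# time-constrained segment only if the estimated elapsed time until the final step
# can start stays within the queue-time limit.
def can_release(queue_hours_at_final_step, hours_to_reach_final_step, queue_time_limit):
    # Crude estimate: the lot needs hours_to_reach_final_step to arrive at the final
    # step and then waits roughly as long as the backlog already queued there.
    estimated_elapsed = hours_to_reach_final_step + queue_hours_at_final_step
    return estimated_elapsed <= queue_time_limit

# Example: 1.0 h of intermediate processing, 2.5 h of backlog, 4.0 h queue-time limit.
print(can_release(2.5, 1.0, 4.0))  # True -> the lot may be released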
Monte Carlo Tree Search-based Algorithm for Dynamic Job Shop Scheduling with Automated Guided Vehicles Duyeon Kim and Hyun-Jung Kim (KAIST) Abstract Abstract A dynamic job shop scheduling problem where jobs are transported by automated guided vehicles (AGVs) is considered to minimize the mean flow time. This problem is first modeled with a timed Petri net (TPN), which is widely used for modeling and analyzing discrete event systems. A firing rule of transitions in a TPN is modified to derive more efficient schedules by considering jobs that have not arrived yet and restricting the unnecessary movement of the AGVs. We propose a Monte Carlo Tree Search (MCTS)-based algorithm for the problem, which searches for schedules in advance within a given time limit. The proposed method shows better performance than combinations of other dispatching rules. MASM Time Considerations in Semiconductor Manufacturing Chair: Raphael Herding (FTK – Forschungsinstitut für Telekommunikation und Kooperation e. V., Westfälische Hochschule) Rolling Horizon Production Planning in a Borderless Fab Setting Raphael Herding (Forschungsinstitut für Telekommunikation und Kooperation e. V., Westfälische Hochschule) and Lars Moench (Forschungsinstitut für Telekommunikation und Kooperation e. V., University of Hagen) Abstract Abstract In this paper, we consider a borderless fab scenario where lots are transferred from one semiconductor wafer fabrication facility (wafer fab) to another nearby wafer fab in order to process certain process steps of the transferred lots there. Production planning is carried out individually for each of the wafer fabs. However, the modeling of the available and requested capacity in the production planning models of the two wafer fabs is affected by the lot transfer. Production planning is carried out in a rolling horizon setting. We show by simulation experiments that modeling the capacity in the production planning formulations correctly leads to improved overall profit compared to a setting where the lot transfer is not taken into account on the execution level and in the planning formulations. Criticality Measures for Time Constraint Tunnels in Semiconductor Manufacturing Benjamin Anthouard (Ecole des Mines de Saint-Etienne, STMicroelectronics); Valeria Borodin (Ecole des Mines de Saint-Etienne); Quentin Christ (STMicroelectronics); Stéphane Dauzère-Pérès (Ecole des Mines de Saint-Etienne); and Renaud Roussel (STMicroelectronics) Abstract Abstract Semiconductor manufacturing processes include more and more (queue) time constraints often spanning multiple operations, which impact both production efficiency and quality. After recalling the problem of time constraint management, this paper focuses on the notion of criticality defined in terms of time constraints at an operational decision level. Various criticality measures are presented. A discrete-event simulation-based approach is used to evaluate the criticality of machines for time constraints. Computational experiments conducted on industrial instances are discussed. The paper ends with some conclusions and perspectives. Putting A Price Tag On Hot Lots And Expediting In Semiconductor Manufacturing Philipp Neuner, Stefan Häussler, Julian Fodor, and Gregor Blossey (University of Innsbruck) Abstract Abstract A common practice in semiconductor manufacturing is to give higher priority to certain "hot lots" to reduce their cycle time and deliver them on time.
Despite the good performance of these high-priority lots, expediting might worsen the overall performance of the fab because it decelerates all other lots. Thus, this paper uses a simulation model of a scaled-down wafer fabrication facility to put a price tag on hot lots and expediting measures and to derive suggestions for decision makers on (i) how much additional profit per hot lot is required to compensate for increasing cost due to introducing hot lots, and (ii) the allowable maximum expediting cost per period. MASM Automated Material Handling Systems Chair: Young Jae Jang (Korea Advanced Institute of Science and Technology, Daim Research) Industry Case: Semiconductor Fab OHT Management System and Digital Twin for OHT Operation Sangpyo Hong, Illhoe Hwang, and Seol Hwang (Daim Research) and Young Jae Jang (KAIST) Abstract Abstract The overhead hoist transport (OHT) system is the primary automated material handling system (AMHS) in a semiconductor fab. We present the framework of the OHT management system (OMS), which effectively controls and manages more than 1,000 OHT vehicles with a reinforcement learning algorithm. In this presentation, we focus on the commercialization of the OMS based on novel algorithms and explain its basic architecture. We also present the Digital Twin Solution of the OMS for testing and verification. The OMS Digital Twin (OMS-DT) consists of a vehicle emulator, OMS connectors, and a virtual factory. We demonstrate how this virtual system has been used by major semiconductor fab manufacturers and has ultimately improved OHT deployment processes. Anomaly Detection for OHT System in Semiconductor Fab Jinhyeok Park (Daim Research), Jiyoon Myung (Samsung SDS), Munki Jo (SK HYNIX), and Sujeong Baek (Hanbat National University) Abstract Abstract Material moves between processing machines in semiconductor fabrication facilities (FABs) are mainly performed by an overhead hoist transfer (OHT) system, which consists of OHT vehicles and guided tracks. Because the OHT system operates as one-way traffic on most tracks, sudden failure of an OHT system causes the obstruction of other OHT vehicle passages and serious material-flow efficiency degradation. We propose an anomaly detection method for the OHT system using multiple time-series sensor signals collected from an OHT vehicle. We developed a recurrent neural network-based autoencoder that considers the characteristics of a moving object. We describe our experimental setup, including the testbed environment, the collected sensor signals, and the model establishment. We also show the verification process and how the IoT board and anomaly detection system can be applied to actual OHT systems operated in FAB environments. A New AMHS Testbed for Semiconductor Manufacturing Kwanwoo Lee, Siyong Song, Daesoon Chang, and Sangchul Park (Ajou University) Abstract Abstract This paper proposes a new dataset of semiconductor fab simulation models. As the size and complexity of FABs increase, it is essential to operate automated material handling systems (AMHS) efficiently. Although the latest testbed, SMT2020 (Semiconductor Manufacturing Testbed 2020), is realistic in scale and complexity for modern semiconductor FABs, there is a lack of details on AMHSs. Therefore, we developed details for an AMHS simulation model with sufficient scale and complexity. A full description of the simulation model features is provided, compared with an assumption about transport times in SMT2020.
This study aims to provide a test environment for researchers to test operation and logistics in a semiconductor FAB. The prototype of the proposed testbed is open to public use. MASM Semiconductor Manufacturing Equipment Chair: Dean Chu (National Taiwan University) Equipment Prognosis with Ghost Point Resampling Method Under Imbalanced Data Restriction Rifa Arifati and Jakey Blue (Institute of Industrial Engineering National Taiwan University) Abstract Abstract Fault detection and classification are crucial in equipment monitoring since it helps to prevent unexpected breakdowns. Nowadays, this task is implemented by massively applying various machine learning and deep learning techniques to detect faults as early as possible. In addition to the semiconductor manufacturing complexity, such as the sensor data being in the form of multivariate time series of variant lengths, one challenge is that the process data are very often (extremely) imbalanced. This research proposes a Ghost Point Resampling methodology which consists of kernelizing the FDC data, calculating distance measures, upper sampling the minority class, and finally classifying the faults in the kernel space. The effectiveness of doing data augmentation in the invisible kernel space will be demonstrated by case studies. Predictive Equipment Health Based on Hidden Markov Model and Production Scheduling Dean Chu and Jakey Blue (Institute of Industrial Engineering, National Taiwan University) and Stéphane Dauzère-Pérès (Mines Saint-Etienne, Université Clermont Auvergne) Abstract Abstract Accurate monitoring of equipment health is one of the critical functions of intelligent manufacturing. Considering the equipment condition is interesting not only for maintenance but also for efficient production planning. Consequently, calculating the Remaining Useful Life (RUL) becomes an extension to support equipment prognostics. A two-phased Predictive Equipment Health with hidden Markov Models (PEHMM) is proposed in this research. The offline behavior learning phase brings together all historical manufacturing data into the HMM to study the dynamics of the equipment condition. Then, the online prognostic phase predicts short-/long-term indices given complete or partial information. By looking into the duration and transition between HMM states, the evolution of the equipment health can be traced, and the impact on the yield can be reduced accordingly. Furthermore, the PEHMM is expected to enhance the decision quality on the scheduling/dispatching of lots in production, the equipment utilization, and the monitoring of production yield. Dispatching to Improve Chemical Usage at Cleans Tools in a Semiconductor Fab BINAY DASH, SHILADITYA CHAKRAVORTY, SUSHIL PATIL, ALTHEA ROSEN, and RYAN NOLTE (GLOBALFOUNDRIES) Abstract Abstract In this effort, we discuss a scenario in the cleans area in semiconductor manufacturing, where optimizing the work in progress management strategy was important to keep the overall cost at minimum. There were various conflicting cost component behaviors. We discuss how by using simulation and studying the behavior of the important performance parameters versus changes in input controllable factors, we were able to drive operational improvement and cost reduction. The simulation approach helped the project team to present the results to stakeholders and proceed with physical tests and final implementations. 
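The cost trade-off studied in the cleans-area abstract above can be caricatured with a toy sweep; the cost model and numbers are entirely hypothetical and unrelated to the actual study. Larger WIP batches reduce chemical-replacement cost per wafer but increase waiting cost, and the sweep picks the batching threshold with the lowest combined cost per wafer.

# Toy cost sweep (hypothetical cost model and numbers, not the study's): trade off
# chemical-replacement cost against WIP waiting cost as a batching threshold varies.
def total_cost(batch_size, chem_change_cost=500.0, wait_cost_per_wafer_hour=2.0,
               arrival_rate_per_hour=10.0):
    chem_cost_per_wafer = chem_change_cost / batch_size            # fewer changes with larger batches
    avg_wait_hours = batch_size / (2.0 * arrival_rate_per_hour)    # mean wait to fill the batch
    return chem_cost_per_wafer + wait_cost_per_wafer_hour * avg_wait_hours

best = min(range(1, 101), key=total_cost)
print("best batch size:", best, "cost per wafer:", round(total_cost(best), 2))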
MASM Batch Processing Chair: Kan Wu (Chang Gung University) Scheduling Jobs with Uncertain Ready Times on a Single Batch Processing Machine Jens Rocholl, Fajun Yang, and Lars Moench (University of Hagen) Abstract Abstract In this paper, we consider a scheduling problem for a single batch processing machine in semiconductor wafer fabrication facilities (wafer fabs). A batch is a group of jobs that are processed at the same time on a single machine. The jobs belong to incompatible families. Only jobs of the same family can be batched together. Each job has a weight, a due date, and a ready time. The performance measure of interest is the total weighted tardiness (TWT). The ready times are calculated based on information related to upstream machines that is stored in a Manufacturing Execution System (MES). Therefore, they can be uncertain. We propose a genetic algorithm (GA)-based approach. Sampling is used to take into account the uncertainty of the ready times. Results of computational experiments are reported that demonstrate that this approach performs well with respect to computing time and solution quality. Job Scheduling of Diffusion Furnaces in a Semiconductor Fab Kan Wu (Chang Gung University) Abstract Abstract Furnaces are commonly seen in the front-end to the middle portion of the semiconductor process flow. Job scheduling of furnaces needs to meet the daily production targets while adhering to job due dates and process constraints. The furnace scheduling problem belongs to a special class of flexible job-shop scheduling with complicated constraints including but not limited to batch processing, reentrance, and time-windows. This problem is NP-hard. The extremely large solution space prevents any straightforward application of optimization techniques. In this paper, several properties are identified to reduce the solution space based on a dynamic programming formulation. With the help of these properties, an efficient algorithm has been developed to find a good solution to this problem. The developed method has been implemented in practical production lines. Compared with existing methods, the developed algorithm gives a higher throughput rate and improves the scheduling efficiency. Learning Dispatching Rules for Energy-aware Scheduling of Jobs on a Single Batch Processing Machine Daniel Sascha Schorn and Lars Moench (University of Hagen) Abstract Abstract In this paper, we consider a scheduling problem for a single batch processing machine in semiconductor wafer fabrication facilities (wafer fabs). An integrated objective function that combines the total weighted tardiness (TWT) and the electricity cost (EC) is considered. A time-of-use (TOU) tariff is assumed. A genetic programming (GP) procedure is proposed to automatically discover dispatching rules for list scheduling approaches. Results of designed computational experiments demonstrate that the learned dispatching rules lead to high-quality schedules in a short amount of computing time. MASM Simulation of Semiconductor Manufacturing Chair: Patrick Christoph Deenen (Eindhoven University of Technology, Nexperia) Application Of A Simulation Model To Forecast Cycle Time Based On Static Model Input Syahril Ridzuan Ab Rahim (Industrial Engineering), Wei Jin Lee (Globalfoundries), Gwendolene Qin Ling Haw (Industrial Engineering), and Oliver Diehl (D-SIMLAB Technologies Pte Ltd) Abstract Abstract The semiconductor wafer fabrication industry is having the challenge to address increasing demand during the global chip shortage. 
The semiconductor industry is increasing its capacity utilization while maintaining its cycle time to address this issue. Currently, static capacity modelling is used to calculate capacity utilization. Due to the complexity of the cycle time calculation, the existing static capacity model is unable to estimate cycle time. One of the pain points for cycle time modelling is the preparation and maintenance of the real-time data as inputs into a simulation model. Therefore, this paper aims to demonstrate how the dynamic capacity engine (DCE) utilizes most of the ready input data from static capacity modelling to forecast fab cycle time. The high accuracy of the DCE is achieved and validated with the actual input (wafer start, actual WIP) and output (dynamic cycle time) which will be discussed in this paper. Graph Representation and Embedding for Semiconductor Manufacturing Fab States Benedikt Schulz and Christoph Jacobi (Karlsruhe Institute of Technology (KIT)), Andrej Gisbrecht and Angelidis Evangelos (Robert Bosch GmbH), and Chew Wye Chan and Boon Ping Gan (D-SIMLAB Technologies Pte Ltd) Abstract Abstract Due to the enormous complexity of semiconductor manufacturing processes, tasks like performance analysis, forecasting, and production planning and control necessitate detailed knowledge about the current state of the manufacturing system. Usually, models and methods for these tasks incorporate feature selection and engineering to extract relevant feature sets from the vast number of available features. However, sets of independent features may not retain structural information that captures interdependencies between entities. To address this challenge, a graph representation model for semiconductor manufacturing fabs that captures structural information, such as the interdependencies of machines, lots, and routes, is presented. The model comprises the essential procedures in semiconductor manufacturing processes, namely process flows, material transfer, setup, and maintenance activities. Finally, we use representation learning to embed graph snapshots into a low-dimensional space. These embeddings can serve as input for a scheduling engine or a performance analysis tool. Building a Digital Twin of the Photolithography Area of a Real-World Wafer Fab to Validate Improved Production Control Patrick Christoph Deenen and Rick Arnold Maria Adriaensen (Eindhoven University of Technology, Nexperia) and John W. Fowler (Arizona State University) Abstract Abstract Since the photolithography area is generally the bottleneck of a wafer fab, effective scheduling in this area can increase the performance of the complete fab significantly. However, the potential benefit of proposed solution methods is often validated in a static and deterministic scheduling setting, while the manufacturing environment is dynamic and stochastic. In this paper, we build a discrete-event simulation model based on real-world data of the photolithography area, which can be used to accurately determine the performance of new scheduling solutions. A case study at a global semiconductor manufacturer is presented. The simulation model, a so-called digital twin, captures the vast majority of the stochastic behavior such as the arrival of jobs, processing times, setup times and machine downs. In addition, a dispatching heuristic is developed to replicate the current practice of production control. Both the simulation model and the dispatching heuristic are validated and shown to be accurate. 
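For readers unfamiliar with the kind of discrete-event model described in the preceding abstract, the following minimal SimPy sketch (our own simplification, not the authors' digital twin; arrival rate, processing rate, and tool count are placeholders) models a photolithography toolset as a multi-server queue with stochastic arrivals and processing times and reports the average queue time.

# Minimal SimPy sketch of a photolithography toolset (illustrative, not the authors'
# digital twin): stochastic lot arrivals queue for a small set of identical tools.
import random
import simpy

random.seed(42)
waits = []

def lot(env, tools, mean_process=2.0):
    arrival = env.now
    with tools.request() as req:
        yield req                                                   # wait for a free tool
        waits.append(env.now - arrival)                             # record queue time
        yield env.timeout(random.expovariate(1.0 / mean_process))   # processing time

def source(env, tools, mean_interarrival=0.8):
    while True:
        yield env.timeout(random.expovariate(1.0 / mean_interarrival))
        env.process(lot(env, tools))

env = simpy.Environment()
tools = simpy.Resource(env, capacity=3)
env.process(source(env, tools))
env.run(until=1000)
print("lots processed:", len(waits), "average queue time:", round(sum(waits) / len(waits), 2))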
MASM Photolithography Scheduling Chair: Cathal Heavey (University of Limerick) Demonstration of the Feasibility of Real Time Application of Machine Learning to Production Scheduling Amir Ghasemi (Amsterdam University of Applied Sciences), Kamil Erkan Kabak (Izmir University of Eco- nomics), and Cathal Heavey (university of limerick) Abstract Abstract Industry 4.0 has placed an emphasis on real-time decision making in the execution of systems, such as semiconductor manufacturing. This article will evaluate a scheduling methodology called Evolutionary Learning Based Simulation Optimization (ELBSO) using data generated by a Manufacturing Execution System (MES) for scheduling a Stochastic Job Shop Scheduling Problem (SJSSP). ELBSO is embedded within Ordinal Optimization (OO), where in the first phase it uses a meta model, which previously was trained by a Discrete Event Simulation model of a SJSSP. The meta model used within ELBSO uses Genetic Programming (GP)-based Machine Learning (ML). Therefore, instead of using the DES model to train and test the meta model, this article uses historical data from a front-end fab to train and test. The results were statistically evaluated for the quality of the fit generated by the meta-model. Deployment of an Advanced A.i. Scheduler at Photolithography: a Seagate Technology Use Case Dennis Xenos and Robert Moss (Flexciton Ltd) and Tina O'Donnell (Seagate) Abstract Abstract Photolithography is the cutting edge of semiconductor manufacturing and requires the most complex and expensive equipment. Photolithography tools are usually the bottleneck area because of the complexities around their operation including the allocation of secondary resources such as reticles to the tools. Reticles are fragile and expensive so the minimisation of the reticle movements helps to mitigate the risk of damaging them, but may sacrifice the fab's fundamental objective of increasing throughput. Our advanced A.I. scheduler can adapt and optimise photolithography tools with a variety of constraints and secondary resources. The scheduler has been implemented in various photolithography toolsets at the Seagate facility in Northern Ireland. It has been running 24/7 for more than a year. The first results from production show a 9.4% increase in lot moves, 4.3% reduction in lot queue time and 5.3% decrease in reticle moves at the same time. Study of Relationships Between Scheduling Objectives In Semiconductor Manufacturing Jérémy Berthier (STMicroelectronics, Mines Saint-Étienne); Stéphane Dauzère-Pérès (Mines Saint-Étienne, BI Norwegian Business School); Claude Yugma (Mines Saint-Étienne); and Alexandre Lima and Rémi Poinas (STMicroelectronics) Abstract Abstract In semiconductor manufacturing, scheduling problems addressed at the operational level involve a rich set of constraints and criteria. As a result, multi-objective optimization algorithms are increasingly preferred over dispatching rules, especially in complex manufacturing areas. This article investigates the relationships between several scheduling objectives considered in the photolithography area. The criteria are first presented and then compared two by two. To this effect, the notion of dominated objective function is introduced. Various relationships are shown in the general case along with different counterexamples. In addition, an experimental analysis is proposed on industrial instances of the photolithography area to assess the conflict level between objectives, but also to confirm the multi-objective approach. 
Finally, some perspectives are provided. MASM Production Planning Chair: Tobias Voelker (University of Hagen) Optimizing Product Mix Profile for Maximum Output and Stable Line Performance in a Giga Fab Georg Seidel (Infineon Technologies Austria AG); Chien Yong Low, Mun Hoe Hooi, Soo Leen Low, and Boon Ping Gan (D-SIMLAB Technologies Pte Ltd); and Birgit Hoelscher (Infineon Technologies GmbH) Abstract Abstract Chip manufacturers have recently been trying to maximize chip output with their existing capacity due to the worldwide chip shortage, because expanding capacity within a short time span is not a practical option. However, maximizing chip output often results in unstable production line performance and creates cyclical actions of increasing and cutting production volume, which does not necessarily lead to maximum output over a period of time. In this paper, we present an approach that combines a greedy optimization algorithm, a static capacity model, and a dynamic simulation model to maximize production output while ensuring stable production line performance. Our approach managed to achieve an 11% output increase as compared to a manual planning effort, and at the same time, achieved a stable dynamic flow factor. Impact of Production Planning Approaches on Wafer Fab Performance During Product Mix Changes Tobias Voelker and Lars Moench (University of Hagen) and Reha Uzsoy (North Carolina State University) Abstract Abstract We present the results of a series of simulation experiments examining the impact of product mix changes on global performance measures such as costs and profit. In these experiments, we apply three production planning models in a rolling horizon setting that differ in their anticipation of shop-floor behavior. The first two are based on exogenous, i.e., fixed, workload-independent lead times, while the third uses non-linear clearing functions to represent workload-dependent lead times. The simulation results clearly demonstrate the benefit of production planning models that correctly anticipate the queueing behavior of the wafer fab. Towards Decentralized Decisions for Managing Product Transitions in Semiconductor Manufacturing Carlos Leca (North Carolina State University), Karl Kempf (Intel Corporation), and Reha Uzsoy (North Carolina State University) Abstract Abstract Continuous renewal of the product portfolio through product transitions is crucial to semiconductor manufacturing firms. These decisions take place in a decentralized environment, where decisions by different functional units must be coordinated to optimize corporate performance. Starting from a centralized optimization model, we obtain decentralized models by applying a Lagrangian relaxation. We explore the challenges encountered in formulating and solving these decentralized models. Although the Lagrangian approaches yield tight upper bounds on the optimal solution value, the structure of the dual solution renders the construction of a near-optimal feasible solution difficult, and fully separable decentralized models encounter significant problems in achieving convergence due to scaling issues. We present computational experiments that illustrate the difficulties involved, and discuss directions for future work.
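The decomposition idea referred to in the preceding abstract can be sketched in generic form; the notation below is ours, not the paper's model. If x and y denote the decisions of two functional units that are coupled only through shared constraints, relaxing those constraints with multipliers makes the problem separate by unit:

\[ z^{*} = \min_{x \in X,\, y \in Y} \; f(x) + g(y) \quad \text{s.t.} \quad Ax + By \le b, \]
\[ L(\lambda) = \min_{x \in X}\bigl[f(x) + \lambda^{\top} A x\bigr] + \min_{y \in Y}\bigl[g(y) + \lambda^{\top} B y\bigr] - \lambda^{\top} b, \qquad \lambda \ge 0 . \]

Each unit then solves its own subproblem for a given \(\lambda\), and \(\max_{\lambda \ge 0} L(\lambda)\) bounds \(z^{*}\) (a lower bound in this minimization form; for a profit-maximization formulation, as in the paper, the corresponding Lagrangian bound is an upper bound).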
MASM Energy Considerations in Semiconductor Manufacturing Chair: Gabriel Weaver (Idaho National Laboratory); John Hasenbein (The University of Texas at Austin) Energy-efficient Semiconductor Manufacturing: Establishing an Ecological Operating Curve Anna Hopf (Infineon Technologies AG), Daniel Schneider (Technische Universität München), Abdelgafar Ismail and Hans Ehm (Infineon Technologies AG), and Gunther Reinhart (Technische Universität München) Abstract Abstract The latest governmental policies aim to mitigate the carbon impact on climate and accelerate the transition towards carbon neutrality by imposing stronger regulations on companies. The semiconductor industry emits carbon dioxide because of the large amounts of energy it consumes. At the same time, machine sensors tracking consumption are rare, and the share of fixed and variable energy consumption is often unknown. To detect the individual energy consumption types of a wafer fab, a process- and infrastructure-oriented discrete-event simulation model is developed that serves as a tool to determine the plant energy consumption within a fab. The obtained shares are validated with existing data. In parallel, a novel energy efficiency curve is constructed and verified by extending the concept of the well-studied Operating Curve. It incorporates the relationship between utilization and energy efficiency and adds an ecological viewpoint to the so far only economically motivated concept. A Planning Model for Incorporating Renewable Energy Sources Into Semiconductor Supply Chains Michael Werner and Lars Moench (University of Hagen) and Jei-Zheng Wu (Soochow University) Abstract Abstract In this paper, we discuss a long-term planning approach for semiconductor supply chains. Since a single semiconductor wafer fabrication facility (wafer fab) needs a huge amount of energy to work as intended, the approach considers sustainability goals and production-oriented objectives. We are interested in determining how many wind turbines and solar photovoltaics need to be installed in a wafer fab to obtain a certain penetration of renewable energy while the prescribed demand is met. A mixed integer linear programming (MILP) model is established for a set of wafer fabs that work in parallel. Computational experiments are carried out to demonstrate the behavior of the model under certain experimental conditions. Simulating Energy and Security Interactions in Semiconductor Manufacturing: Insights from the Intel Minifab Model Gabriel Weaver (Idaho National Laboratory), Jacob Shusko (The University of Texas at Austin), Gonzalo Medina (The University of Texas at San Antonio), John Hasenbein (The University of Texas at Austin), Krystel Castillo-Villar (The University of Texas at San Antonio), Erhan Kutanoglu (The University of Texas at Austin), and Paulo Costa (George Mason University) Abstract Abstract Semiconductor manufacturing, particularly wafer fabrication, is a highly complex system of processes and workflows. Fabrication facilities must deal with re-entrant flows to support multiple types of wafers being produced simultaneously, each with its own deadlines and specifications. The manufacturing process itself depends upon the ability to control and programmatically adjust a variety of environmental conditions. In addition, wafer fabrication consumes large amounts of energy, particularly electricity. Emerging technologies including networked devices may help reduce the energy footprint but can introduce cybersecurity risks.
Therefore, this paper presents a modeling and simulation framework to quantify tradeoffs between operational measures of performance, energy consumption, and cybersecurity risks. We augment the Intel Minifab model with an Industrial Control Systems (ICS) reference model based on the Purdue Enterprise Reference Architecture (PERA) as well as tool-level energy consumption data from a semiconductor manufacturing testbed. MASM Scheduling Assembly/Test Operations Chair: Christian John Immanuel Boydon (National Taiwan University) Multi-agent Framework for Intelligent Dispatching and Maintenance in Semiconductor Assembly and Testing Christian John Immanuel Boydon, Bin Zhang, and Cheng-Hung Wu (National Taiwan University) Abstract Abstract This research presents a multi-agent framework for real-time dynamic dispatching and preventive maintenance (DDPM) of unrelated parallel machines in semiconductor assembly and testing. Uncertain job arrivals and random machine health deterioration are considered. Markov decision processes solve the DDPM problem, but are limited by the curse of dimensionality for large systems. To overcome this challenge, a multi-agent framework is designed to decompose the large DDPM problem into single-machine DDPM subproblems whose optimal strategies are then coordinated globally. Simulation studies show that cycle time is improved by at least 13.3% compared with traditional dispatching rules for large systems under a wide range of production settings. Autonomous Scheduling in Semiconductor Back-End Manufacturing Jelle Adan and Alp Akcay (Eindhoven University of Technology), Michael Geurtsen (Nexperia), Marc Albers (Eindhoven University of Technology), and John Fowler (Arizona State University) Abstract Abstract Production scheduling decisions have a large impact on efficiency and output, especially in complex environments such as those with sequence- and machine-dependent setup times. In practice, these scheduling problems are usually solved for a fixed time ahead. In semiconductor back-end manufacturing, given the dynamics of the environment, it is commonly observed that a schedule is no longer optimal soon after it is made. Here, we propose time-based rescheduling heuristics that can mitigate the effect of these deviations from the schedules. We build a simulation model to represent the dynamics of the shop floor as well as its interaction with the upper management level that decides how orders are released. The simulation model, which is built and validated using real-world data, enables us to evaluate the performance of the rescheduling heuristics. By comparing the results to the case without rescheduling, it is shown that rescheduling can significantly improve relevant performance measures. Evaluating Scheduling Period in Training Data Collection for Situation Aware Dispatching Chew Wye Chan and Boon Ping Gan (D-SIMLAB Technologies Pte Ltd) and Wentong Cai (Nanyang Technological University) Abstract Abstract Earlier studies have shown that adapting dispatch rules to manufacturing situations improves factory performance. A trained machine learning model has been used to learn the relationship between manufacturing situations and dispatch rules. In order to generate training data for the model, a multi-pass simulation technique has been used to evaluate candidate dispatch rules under certain manufacturing situations. 
The candidate dispatch rule is applied within a fixed scheduling period, such as daily or weekly, to evaluate the factory's performance in the scheduling period. However, the chosen scheduling period should be able to differentiate the impact of the candidate dispatch rules on the factory's performance. A short scheduling period will result in redundant training data, but a long scheduling period will result in insufficient training data. In this study, we evaluate the effect of the scheduling period on the accuracy of the trained machine learning model. MASM MASM Keynote Chair: Peter Lendermann (D-SIMLAB Technologies Pte Ltd) Industry 4.0 Innovation in Semiconductor Planning and Operations Koen De Backer (Micron) Abstract Abstract Industry 4.0 innovations in smart manufacturing and AI have demonstrated impact at scale in the semiconductor manufacturing industry. The application of big data technology started in quality and yield improvement. However, not much attention was given to planning and scheduling in the semiconductor industry, which is key to productivity improvement and customer demand satisfaction. Micron Technology is a world leader in semiconductor memory and data storage and has an initiative to drive Industry 4.0 for the planning and scheduling pillar with smart manufacturing and artificial intelligence. This keynote will introduce the evolutionary steps of planning and scheduling systems to support smart manufacturing and digital transformation, leading to streamlined planning and scheduling that drive productivity improvement and demand satisfaction using digital twins, optimization, and artificial intelligence. MASM Semiconductor Supply Chains Chair: Jan-Philip Erdmann (Infineon Technologies AG) Demand Predictability Evaluation for Supply Chain Processes Using Semantic Web Technologies Use Case Nour Ramzy, Philipp Ulrich, Lancelot Mairesse, and Hans Ehm (Infineon Technologies AG) Abstract Abstract Semantic Web technologies provide the possibility of a common framework to share knowledge across supply chain networks. We explore Semantic Web technologies to evaluate processes' demand predictability through a use case. First, we create an ontology describing the relevant domain concepts and data and define competency questions based on the needs of the use case. Then, we map the data to the ontology in a knowledge graph. We design a chain of SPARQL queries to retrieve and insert information from the knowledge graph to answer the competency questions. We calculate the underlying demand for supply chain processes using aggregations and the created semantic description. We successfully computed a pre-defined metric for demand predictability for different time scopes and process groups: the mean of the yearly coefficient of variation. Using this approach, one could perform predictability evaluations for relevant indicators of end-to-end supply chains by further data integration in the semantic framework. Simulation-Based Analysis Of Recovery Actions Under Vendor-Managed Inventory Amid Black Swan Disruptions In The Semiconductor Industry: A Case Study From Infineon Technologies AG Manuel Fernando Lopera Diaz, Hans Ehm, and Abdelgafar Ismail (Infineon Technologies AG) Abstract Abstract The current pandemic outbreak, unlike other types of events, has impacted many firms' supply and demand with unprecedented consequences. The scope of these effects greatly depends on the characteristics of the industry.
This research evaluates the performance of a specific implementation of vendor-managed inventory (VMI) in a case study from a semiconductor company. A multi-period, multi-echelon serial supply chain consisting of the customer VMI warehouse (facing the end demand), the supplier distribution center, and the supplier manufacturing plant is studied with agent-based and discrete event simulation. The results suggest that the severity of the demand reduction plays a big role in the replenishment process of the VMI, creating a bullwhip effect which reduces the speed of recovery. The behavior of the customer in terms of the quality of the forecast and whether or not it has been inflated allows the supplier to better plan when dealing with limited capacity. Increasing Supply Chain Robustness during Allocation in a Just-in-Time Supply Set-Up Volker Dörrsam, Jan-Philip Erdmann, and Patrick Moder (Infineon Technologies AG) Abstract Abstract In situations of scarcity, that is, when demand exceeds available supply, a stable allocation of capacities among customers contributes to a more robust supply chain behavior. Given the input of available capacities in the first place, this paper presents an analytical approach that models their smooth tactical distribution by accounting for product-level deviations. The proposed model may serve as input for allocation determination in situations with demand surges. By simulating the model and conducting experiments using real-world data from a globally acting semiconductor manufacturer, the paper provides empirical evidence of the results in terms of supply chain stability. Still, the proposed model ensures sufficient flexibility due to well-defined target inventory levels. Track Coordinator - Military and National Security Applications: Nathaniel D. Bastian (United States Military Academy), James Starling (U.S. Military Academy) Military and National Security Applications Remote Military and National Security Applications Chair: Nathaniel D. Bastian (United States Military Academy); James Starling (U.S. Military Academy) Supervised Machine Learning for Effective Missile Launch Based on Beyond Visual Range Air Combat Simulations Joao P. A. Dantas, Andre N. Costa, Felipe L. L. Medeiros, and Diego Geraldo (Institute for Advanced Studies) and Marcos R. O. A. Maximo and Takashi Yoneyama (Aeronautics Institute of Technology) Abstract Abstract This work compares supervised machine learning methods using reliable data from beyond visual range air combat constructive simulations to estimate the most effective moment for launching missiles. We employed resampling techniques to improve the predictive model, and we could identify the remarkable performance of the models based on decision trees and the significant sensitivity of other algorithms. The models with the best f1-scores achieved values of 0.379 and 0.465 without and with the resampling technique, respectively, an increase of 22.69%, with appropriate inference time. Thus, if desirable, resampling techniques can improve the model's recall and f1-score with a slight decline in accuracy and precision. Therefore, using data obtained through constructive simulations, it is possible to develop decision support tools based on machine learning models, which may improve the flight quality in BVR air combat, increasing the effectiveness of offensive missions to hit a particular target.
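A generic version of the pipeline summarized in the preceding abstract could look as follows; the paper does not specify its tooling, so the library choices, synthetic data, and hyperparameters below are our assumptions. The sketch oversamples the minority class, fits a decision tree, and reports the f1-score.

# Illustrative pipeline (library choices and data are assumptions, not the paper's):
# oversample the minority class, train a decision tree, and report the f1-score.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from imblearn.over_sampling import SMOTE  # assumes the imbalanced-learn package

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))              # placeholder engagement features
y = (rng.random(1000) < 0.1).astype(int)    # rare "effective launch" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # resampling step

clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_res, y_res)
print("f1-score:", round(f1_score(y_te, clf.predict(X_te)), 3))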
A Meta-Heuristic Solution Approach to Isolated Evacuation Problems Klaas Fiete Krutein, Anne Goodchild, and Linda Ng Boyle (University of Washington) Abstract Abstract This paper provides an approximation method for the optimization of isolated evacuation operations, modeled through the recently introduced Isolated Community Evacuation Problem (ICEP). This routing model optimizes the planning for evacuations of isolated areas, such as islands, mountain valleys, or locations cut off through hostile military action or other hazards that are not accessible by road and require evacuation by a coordinated set of special equipment. Due to its routing structure, the ICEP is NP-complete and does not scale well. The urgent need for decisions during emergencies requires evacuation models to be solved quickly. Therefore, this paper investigates solving this problem using a Biased Random-Key Genetic Algorithm. The paper presents a new decoder specific to the ICEP that allows translating between an instance of the S-ICEP and the BRKGA. This method approximates the global optimum and is suitable for parallel processing. The method is validated through computational experiments. An Application of Automated Machine Learning within a Data Farming Process Lynne Serre and Maude Amyot-Bourgeois (Defence Research and Development Canada) Abstract Abstract Data farming is a simulation-based methodology used within the defense community to analyze complex systems and provide insights to decision makers. It can produce very large, multi-dimensional data sets that require sophisticated analysis tools, such as metamodeling. Advances in explainable artificial intelligence have expanded the types of metamodels that can be considered; however, constructing a well-fitting machine learning metamodel involves many tasks that can become time consuming for an analyst. Automated machine learning (autoML) can save an analyst time by automating metamodel training, tuning and testing. Using outputs of an agent-based simulation of a military ground-based air defense scenario, we compared the performance of metamodels trained using autoML and different experimental designs. We found that autoML can reasonably automate the construction of metamodels and adds robustness to the analysis by considering multiple types of metamodels; however, the type and size of experimental design can significantly impact metamodel performance. Military and National Security Applications Military Keynote Chair: Nathaniel D. Bastian (United States Military Academy); James Starling (U.S. Military Academy) In Pursuit of Defence AI and Autonomous Systems Robert Hunjet (Department of Defence) Abstract Abstract With rapid advances in Artificial Intelligence, we are seeing increased maturity in Robotics, Autonomous Systems and Artificial Intelligence (RASAI) offerings, ranging from drones through to higher degrees of autonomy in cars. But where are they in the military context? These systems are promised to remove humans from harm's way. Why aren't they ubiquitous in Defense? Building Defense RASAI is a tough ask. This talk explores some of the reasons why. It offers a perspective on why we are yet to see highly capable AI and Autonomous systems deployed in Defense scenarios, and then explores how modelling and simulation might aid in the development and training of such systems. Military and National Security Applications Air-Defense and Naval Applications Chair: Nathaniel D. Bastian (United States Military Academy); James Starling (U.S.
Military Academy) An Anomaly In Intercept Time For Short Range Ballistic Re-entry Vehicles Bao Uyen Nguyen, Maude Amyot-Bourgeois, and Brittany C. Astles (Defence Research and Development Canada) Abstract Abstract This paper documents an anomaly in intercept time of a ballistic re-entry vehicle (RV) by a ballistic interceptor. Intuitively, it is expected that as soon as an incoming RV is detected, the defense will launch an interceptor. However, we show that for short range, lofted RV trajectory and an interceptor slower than the RV, it is best to delay the launch of the interceptor for a minimal intercept time. By minimizing the intercept time, the RV is intercepted earlier, further away from the defense location. Additionally, minimal intercept times may maximize the number of engagement opportunities. This will allow the defense to improve the probability of raid negation (the probability of neutralizing all incoming RVs). We will show how to minimize the intercept time using analysis in the phase space (velocities of the RV and the interceptor) and validate the results using MANA (Map-Aware Non-uniform Automata). Intercept Considerations for Devising a Dipping Sonar Search Strategy to Locate an Approaching Submarine Peter Joseph Young (DRDC CORA, CFMWC ORT) Abstract Abstract A hostile conventional submarine will attempt to get within close enough range to a ship so as to launch a torpedo. To counter this, an Anti-Submarine Warfare (ASW) helicopter with dipping sonar capability can be used to search for an approaching submarine. In situations where earlier contact information of the submarine by an external source is available, intercept trajectory considerations can be used to determine an intercept zone for the submarine from which it can attack the ship. This information can then be used to devise a search strategy for the helicopter to locate the submarine, after which counter measures can be taken against the submarine. This problem has been investigated using a naval combat modelling environment, with results of the methodology development, implementation and analysis reported here. Military and National Security Applications Military Communications Chair: Nathaniel D. Bastian (United States Military Academy); James Starling (U.S. Military Academy) Message Prioritization in Contested and Dynamic Tactical Networks using Regression Methods and Mission Context Rohit Raj Gopalan, Md Hedayetul Islam Shovon, and Benjamin Campbell (Defence Science and Technology); Vanja Radenovic, Kym McLeod, and Leith Campbell (Consilium Technology); and Dustin Craggs and Claudia Szabo (University of Adelaide) Abstract Abstract Military communications at the tactical edge consist of unreliable, disrupted, and limited bandwidth networks, which can lead to the delay and loss of critical information. These networks are increasingly being used for the transmission of digital command and control (C2) information, requiring timely and accurate transmission, and play a vital role in the outcome of military operations. Machine Learning (ML) techniques have the potential to improve operational outcomes by autonomously prioritizing the delivery of the most important information through these networks, using observations of the current mission and network state. This paper covers the experimental process and the operational metric used for comparison between the ML and a non-ML approach that sorts messages in a fixed order. 
We present two regression-based supervised-learning methods that were shown to be more effective than the non-ML approach in both medium- and high-congestion networks. Robustness of Middleware Communication in Contested and Dynamic Environments Claudia Szabo (University of Adelaide); Dustin Craggs and Dumitru Alin Balasoiu (University of Adelaide); Vanja Radenovic (Consilium Technology); and Benjamin Campbell (Defence Science and Technology Group) Abstract Abstract Contested and dynamic environments with poor and unreliable network conditions are often encountered by military operations or crisis management systems. While it is a common occurrence for networks to behave poorly, or for system nodes to malfunction, most existing decision making algorithms have only been tested in perfect conditions. In this paper, we present an analysis of the robustness of reinforcement learning and evolutionary approaches employed in a communication middleware that operates in a contested environment. The SMARTNet middleware prioritizes and controls the messages sent by each node, with the aim of preserving network bandwidth. We evaluate the robustness of a reinforcement learning and an evolutionary computation implementation as SMARTNet executes in changing conditions, with nodes dropping out and the network becoming congested both generally and for specific message types. Jeopardy Assessment for Dynamic Configuration Of Collaborative Microservice Architectures Glen Pearce (Defence Science and Technology Group); Alexis Pflaum and Dumitru Alin Balasoiu (University of Adelaide); and Claudia Szabo (University of Adelaide) Abstract Abstract Microservice architectures, which are lightweight and flexible and adapt easily to changes, have recently been considered for system development in military operations in contested and dynamic environments. However, in a military setting, the dynamic configuration of collaborative microservices execution becomes critical, and testing that microservice configurations behave as expected becomes paramount. In this paper, we propose a complex jeopardy metric and reconfiguration process that dynamically configures collaborative algorithms running on multiple nodes. Our metric and proposed scenarios will allow for the automated evaluation of microservice configurations and their re-configuration to suit operational needs. We evaluate our proposed scenario, metric, and various reconfiguration algorithms to show the benefits of this approach. Military and National Security Applications Military Operations Applications Chair: Nathaniel D. Bastian (United States Military Academy); James Starling (U.S. Military Academy) AI-based Military Decision Support Using Natural Language Michael Möbius and Daniel Kallfass (Airbus Defence and Space GmbH), Thomas Manfred Doll (Army Concepts and Capabilities Development Centre), and Dietmar Kunde (German Army Headquarters) Abstract Abstract To mimic a realistic representation of military operations, serious combat simulations require sound tactical behavior from modeled entities. Therefore, one must define combat tactics, doctrines, rules of engagement, and concepts of operation. Reinforcement learning has been proven to generate a broad range of tactical actions within the behavioral boundaries of the involved entities.
In a multi-agent ground combat scenario, this paper demonstrates how our artificial intelligence (AI) application develops strategies and provides orders to subsidiary units while conducting missions accordingly. We propose a combined approach where human knowledge and responsibility collaborate with an AI system. To communicate on a common level, the orders and actions imposed by the AI are given in natural language. This empowers the human operator to act in a human-on-the-loop role in order to validate and evaluate the reasoning of the AI. This paper showcases the successful integration of natural language into the reinforcement learning process. Experimenting with the Mosaic Warfare Concept Erdal Cayirci (Joint Warfare Training Center, Dataunitor AS) and Ramzan AlNaimi, Sara Salem AlNabet, Sarah Abdulla AlAli, and Sara Mubarak AlHajri (Joint Warfare Training Center) Abstract Abstract Mosaic warfare is a warfighting theory suggesting that a force made up of a larger number and variety of agile, fluid, and scalable weaponry, sensors, and platforms is more effective and resilient than a force developed following a system-of-systems approach. Each member of a mosaic force is as distinct as the tiles in a mosaic. They can decide and act based on local situational awareness. This can have an overpowering advantage as compared to going head-to-head against the enemy's similar weapons and platforms. Mosaic warfare increases the speed of decision-making and can enable commanders to mount more simultaneous actions, which adds complexity to the decision-making of the opposing forces. The enabling technologies for the mosaic warfare concept are investigated, an experimentation environment for testing the concept is created, preliminary discovery experiments are conducted, and the results from these experiments are presented. Track Coordinator - Modeling Methodology: Rodrigo Castro (Universidad de Buenos Aires, ICC-CONICET), Gabriel Wainer (Carleton University) Modeling Methodology Frameworks and Standards Chair: Adelinde Uhrmacher (University of Rostock) Towards a Unifying Framework for Modeling, Execution, Simulation, and Optimization of Resource-aware Business Processes Asvin Goel (Kühne Logistics University) Abstract Abstract This paper proposes an extension to BPMN 2.0 to be used within a framework for executing and simulating resource-aware business processes. The framework consists of three core components: a data provider responsible for acquiring all relevant information, an execution engine responsible for advancing process execution according to the respective execution logic, and a controller responsible for making all decisions required during process execution. The data provider and controller can be easily replaced depending on the use case, and the entire execution logic is encapsulated within the execution engine. The framework is designed in such a way that it allows any decision mechanism to be deployed, ranging from manual decision making to sophisticated optimization algorithms. Creating PROV-DM Graphs from Model Databases Pia Wilsdorf and Adelinde M. Uhrmacher (University of Rostock) Abstract Abstract Documenting the provenance of the main products of a simulation study plays a crucial role in improving the understanding of mechanistic, biological models as well as their reproducibility and credibility. With model databases, an ample collection of simulation models, including metainformation and source files, already exists.
In this paper, we bridge the gap between the information contained in model databases and the PROV-DM provenance standard, which allows making the diverse products and their relationships formally explicit. We present a procedure for creating PROV-DM graphs from model database entries, and illustrate the approach based on ten different models from the BioModels database. These case studies demonstrate the advantages of having a standardized provenance view in addition to the regular database entries, i.e., enhanced means for visualizing the structure of the simulation study and the curation process. Seamless Simulation-Based Verification and Validation of Event-Driven Software Systems Tom Meyer, Philipp Andelfinger, Andreas Ruscheinski, and Adelinde M. Uhrmacher (University of Rostock) Abstract Abstract Verification and validation (V&V) are essential concerns in the development of safety-critical distributed software systems. V&V efforts targeting full system implementations rely on testing, which requires real-world deployments and cumbersome analysis to track down issues across distributed software components. Here, we propose a simulation-based development and testing framework for distributed systems following the event-driven architecture (EDA) paradigm. During development, unmodified software components can be executed in their interaction with a simulated environment, allowing for early testing under envisioned deployments. After introducing the interplay of EDA and discrete-event simulation, we present our framework's architecture and the API offered to software components, which closely follows accepted EDA principles. We demonstrate the use of our framework on a medical software system used in the diagnosis of rare genetic diseases. By observing the system's interaction with simulated laboratories, the feedback loop between diagnoses by laboratories and classifications from the software system is evaluated. Modeling Methodology DEVS Theory and Practice Chair: Cristina Ruiz-Martín (Carleton University) Cross-formalism Decomposition of DEVS Coupled Models Neal J. DeBuhr and Hessam S. Sarjoughian (Arizona State University) Abstract Abstract This paper proposes a cross-formalism model decomposition process such that Discrete Event System Specification (DEVS) coupled models can be automatically transformed to event graphs. We approach this process from both methodological and software implementation vantage points. A plurality of system models, from multiple modeling formalisms, may improve simulation project soft factors like collaborative model design and shared system understanding, as well as technical advantages like improved model portability. While additional research is needed to better understand the value of having multiple models, when one might otherwise suffice, well-defined and automated processes for cross-formalism modeling should facilitate the realization of this value. The choice of source and target modeling formalisms reflects the interest of the authors in investigating the role of hierarchy in these cross-formalism simulation problems and processes. Hierarchical modeling has significant overlap in technical and non-technical benefits, so it is an interesting concept to consider alongside cross-formalism modeling. 
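The cross-formalism decomposition described above starts from the hierarchical structure of a DEVS coupled model. As a hedged illustration of one ingredient of such a transformation, the minimal Python sketch below represents a two-level coupled model and collects its atomic components and sibling-level couplings into a flat node/edge view; the class names, the example model, and the omission of port routing across coupled-model boundaries are simplifications made here, not details of the DeBuhr and Sarjoughian implementation.

```python
# Minimal, illustrative sketch (not the authors' code) of flattening a hierarchical
# DEVS coupled model into a flat set of components and direct links, a typical
# preprocessing step before deriving another formalism such as an event graph.
from dataclasses import dataclass, field

@dataclass
class Atomic:
    name: str

@dataclass
class Coupled:
    name: str
    components: list = field(default_factory=list)          # Atomic or Coupled children
    internal_couplings: list = field(default_factory=list)  # (src_name, dst_name) pairs

def flatten(model, prefix=""):
    """Return (atomic_names, edges) for a coupled model, using dotted path names."""
    atomics, edges = [], []
    path = f"{prefix}{model.name}"
    for child in model.components:
        if isinstance(child, Atomic):
            atomics.append(f"{path}.{child.name}")
        else:
            sub_atomics, sub_edges = flatten(child, prefix=f"{path}.")
            atomics.extend(sub_atomics)
            edges.extend(sub_edges)
    edges.extend((f"{path}.{s}", f"{path}.{d}") for s, d in model.internal_couplings)
    return atomics, edges

# Hypothetical two-level model: a generator feeding a queue/processor subsystem.
server = Coupled("Server", [Atomic("Queue"), Atomic("Processor")], [("Queue", "Processor")])
root = Coupled("Factory", [Atomic("Generator"), server], [("Generator", "Server")])

print(flatten(root))
```

In a full decomposition, couplings that pass through a coupled model's boundary ports would additionally be resolved down to atomic endpoints before the target-formalism model is generated.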
DEVS Model Design for Simulation Web App Deployment Laurent Capocchi and Jean Francois Santucci (UMR CNRS 6134), Bernard Zeigler (RTSync Corp.), and Johanna Fericean (University of Corsica) Abstract Abstract The discrete-event system specification (DEVS) formalism has been recognized to enable a formal and complete description of hybrid model components and subsystems. What is missing for accelerated adoption of DEVS-based methodology is a way to design web apps that interact with a simulation model and to automatically deploy the model on an online server that is remotely accessible from the web app. The deployment of DEVS simulation models is the process of making models available in production, where web applications, enterprise software, and APIs can consume the simulation by providing new inputs and generating outputs. This paper proposes a framework that simplifies DEVS simulation model building and deployment on the web for modeling and simulation engineers with minimal web development knowledge. A case study concerning the management of COVID-19 epidemic supervision using a simulation web app is presented. Composable Geo-referenced Multi-resolution Multi-agent CA-based DEVS, KIB, and PDE Models Hessam Sarjoughian and Chao Zhang (Arizona State University) Abstract Abstract Including geometry in non-spatial automata elevates their expressiveness. This provides the context required to understand many natural and built systems and facilitate their development. Indeed, the scope and types of questions asked by domain experts are continually rising due to the varied and intertwined structures and dynamics of hybrid systems. This is especially evident for heterogeneous models required to solve complex problems. They benefit from using different modeling formalisms and simulation frameworks. In this paper, an approach targeting the development of composable, heterogeneous, multi-resolution, spatiotemporal models formalized according to modular, cellular automata, and multi-agent models grounded in parallel DEVS, Modelica, and Geo-referenced Knowledge Interchange Broker methods is proposed. This approach is used to develop a co-simulation framework supported by the DEVS-Suite and OpenModelica simulators and the Functional Mock-up Interface.
A multi-scale model for human breast cancer biology highlights the use of the developed approach and the co-simulation framework. Modeling Methodology Applications Chair: Neal DeBuhr (Arizona State University) A Generalized Model For Modern Hierarchical Memory System Hamed Najafi (Florida International University), Xiaoyang Lu (Illinois Institute of Technology), Jason Liu (Florida International University), and Xian-He Sun (Illinois Institute of Technology) Abstract Abstract The memory system is a critical part of architecture design that can significantly impact application performance. Concurrent Average Memory Access Time (C-AMAT) is a model for analyzing and optimizing memory system performance using a recursive definition of the memory access latency along the memory hierarchy. The original C-AMAT model, however, does not provide the necessary granularity and flexibility for handling modern memory architectures with heterogeneous memory technologies and diverse system topology. We propose to augment C-AMAT to take into consideration the idiosyncrasies of individual cache/memory components as well as their topological arrangement in the memory architecture design. Through trace-based simulation, we validate the augmented model and examine the memory system performance with insight unavailable using the original C-AMAT model. Nonparametric Density Estimation - a Numerical Exploration Paul Evangelista and Vikram Mittal (United States Military Academy) Abstract Abstract The simulation of distributions without parametric assumptions requires direct estimation of the underlying density function from sample data. Extensive literature discusses the theoretical aspects of this problem. This paper discusses the application and practical implications of nonparametric density estimation. Primarily using piecewise linear interpolation and the Nadaraya–Watson kernel regression methods, tests and experiments show the suitability of nonparametric methods for various circumstances. Nonparametric density estimation has the potential to support complex distributions, which would enable accurate simulation in a fully automated environment. The Use of Simulation with Machine Learning and Optimization for a Digital Twin - A Case on Formula 1 DSS Andrew Greasley and Gajanan Panchal (Aston University) and Avinash Samvedi (National University of Singapore) Abstract Abstract The implementation of a digital twin presents a challenging environment for simulation. One challenge is the need for fast execution speed to maintain synchronization with the real system. When providing predictive outcomes, the complementary use of simulation with machine learning and optimization software may be employed to achieve this aim. The article investigates the use of simulation, machine learning, and optimization in terms of providing a digital twin capability. The article presents a case on Formula 1 (F1) competition, where a decision support system (DSS) framework is presented to explore a digital twin capability. Modeling Methodology Methodologies Chair: Ezequiel Pecker Marcosig (UBA, CONICET) Exploiting the Levels of System Specification for Modeling of Mind Bernard Zeigler (University of Arizona, RTSync Corp.) Abstract Abstract Our objective is to show how the hierarchy of system specifications and morphisms affords a framework that supports modeling and simulation of mind/brain in a coherent manner.
Such a framework provides a credible path for simulation/implementation of cognitive behavior at the level of neurons and neuronal compositions. Here we characterize this methodology and use it to extend the set of DEVS building blocks and architectural patterns within complex decision trees and advanced classifier implementations as DEVS coupled models. Such a methodology offers insight into the way real brains realize cognitive behaviors and can improve upon current hardware architecture to support simulation/implementation of neuromorphic designs that comply with physiologically based brain network properties and constraints. From Narratives to Conceptual Models via Natural Language Processing David Shuttleworth and Jose J. Padilla (Old Dominion University) Abstract Abstract This paper explores the use of natural language processing (NLP) towards the semi-automatic generation of conceptual models, and eventual simulation specifications, from descriptions of a phenomenon. Narratives describing the problem are transformed into a list of concepts and relationships and visualized using a network graph. The process relies on pattern-based grammatical rules and an NLP dependency parser identifying important concept types, namely actors, factors, and mechanisms. We use three conceptualizations, created by potential users, to understand how the NLP-generated model should and could be adjusted. The objective of the research is to develop potential standard approaches users can use to generate conceptual models; to develop a conceptual modeling assistant that subject matter experts can use to make them participants in the simulation creation process; and to identify how narratives should be written so an NLP-based conceptual modeling assistant may provide a thorough description of a phenomenon. Microscopic Vehicular Traffic Simulation: Comparison of Calibration Techniques Casey Bowman (University of North Georgia) and Yulong Wang and John A. Miller (University of Georgia) Abstract Abstract Modeling the flow of traffic is an extremely important endeavor for city planners around the world. Accurate models of traffic systems are the main tool researchers require to solve traffic problems. From traffic congestion to unsafe traffic corridors to forecasting travel times, accurate traffic models are essential to improving our experience on our roads and highways. Traffic is a highly complex and dynamic system that, nevertheless, also shows clear patterns of regular behaviors, and as such appears to be a problem that is solvable. Effective traffic models need to be constructed with real-world data informing the design and must be calibrated with the data as well. This paper presents an arrival modeling technique using online data that improves upon offline arrival modeling. The calibration of traffic simulation models is also discussed, and three techniques are compared and contrasted according to accuracy and efficiency. Modeling Methodology, Simulation and AI Model Recognition and Identification Chair: Edward Y. Hua (MITRE Corporation) Can Machines Solve General Queueing Problems? Eliran Sherzer (University of Toronto), Arik Senderovich (York University), and Dmitry Krass and Opher Baron (University of Toronto) Abstract Abstract We study how well a machine can solve a general problem in queueing theory, using a neural net to predict the stationary queue-length distribution of an M/G/1 queue. This problem is, arguably, the most general queueing problem for which an analytical "ground truth" solution exists.
We overcome two key challenges: (1) generating training data that provide "diverse" service time distributions, and (2) providing continuous service distributions as input to the neural net. To overcome (1), we develop an algorithm to sample phase-type service time distributions that cover a broad space of non-negative distributions; exact solutions of M/PH/1 (with phase-type service) are used for the training data. For (2), we find that using only the first n moments of the service times as inputs is sufficient to train the neural net. Our empirical results indicate that neural nets can estimate the stationary behavior of the M/G/1 extremely accurately. Simulation Based Approach for Reconfiguration and Ramp up Scenario Analysis in Factory Planning Florian Schmid (OTH Regensburg); Tobias Wild (Simplan AG); and Jan Schneidewind, Tobias Vogl, Lukas Schuhegger, and Stefan Galka (OTH Regensburg) Abstract Abstract Structural changes in production entail a potential economic risk for manufacturing companies. It is necessary to identify a suitable strategy for the reconfiguration process and to continue to meet the demand during the change in the factory structure and ramp-up phase. A simulation offers the possibility to analyze different ramp-up scenarios for the factory structure and to select a suitable concept for the reconfiguration process. A discrete event simulation approach is presented that can be used to evaluate variants of structural changes and serves as a basis for deciding on a reconfiguration strategy. This approach is demonstrated using a specific production step of a plant producing hydrogen electrolyzers; the results and generalized conclusions are discussed. Model Uncertainty and Robust Simulations Robust Simulation Optimization Chair: Sara Shashaani (North Carolina State University) Robust Simulation Optimization with Stratification Pranav Jain and Sara Shashaani (North Carolina State University) and Eunshin Byon (University of Michigan) Abstract Abstract Stratification has been widely used as a variance reduction technique when estimating a simulation output, whereby the input variates are generated following a stratified sampling rule from previously determined strata. This study shows that an adaptive sampling class of simulation optimization solvers called ASTRO-DF could become more robust with stratification, yielding S-ASTRO-DF. For a simulation optimization algorithm, we discuss how to monitor the robustness in terms of bias and variance of the outcome and introduce several metrics to compute and compare the robustness of solvers. We find that while stratified sampling improves the algorithm's performance, its robustness is sensitive to the stratification structure. In particular, as the number of strata increases, the stratified sampling-based algorithms may become less effective. Optimizing Input Data Acquisition for Ranking and Selection: A View Through the Most Probable Best Taeho Kim and Eunhye Song (Georgia Institute of Technology) Abstract Abstract This paper concerns a Bayesian ranking and selection (R&S) problem under input uncertainty when all solutions are simulated with common input models estimated from data. We assume that there are multiple independent input data sources from which additional data can be collected at a cost to reduce input uncertainty.
To optimize input data acquisition, we first show that the most probable best (MPB)—the solution with the largest posterior probability of being optimal (posterior preference)—is a strongly consistent estimator for the real-world optimum. We investigate the optimal asymptotic static sampling ratios from the input data sources that maximize the exponential convergence rate of the MPB's posterior preference. We then create a sequential sampling rule that balances the simulation and input data collection effort. The proposed algorithm stops with posterior confidence in the solution quality. Admission Control In the Presence of Arrival Forecasts with Blocking-Based Policy Optimization Karthyek Murthy (Singapore University of Technology and Design) and Divya Padmanabhan and Satyanath Bhat (Indian Institute of Technology Goa) Abstract Abstract This paper presents a simulation-based policy optimization scheme for performing queuing admission control in the presence of noisy arrival forecasts. The forecast models considered for arrival times go beyond the "no noise" and "no show" forecast models treated in the literature and incorporate realistic features such as a decreasing accuracy profile for jobs arriving farther in the future. Assuming access to forecasts for arrivals within a look-ahead window, the paper proposes optimization over a policy class which approximates combinations of threshold and blocking type policies in the literature. While threshold policies tend to be optimal for admission control problems without forecasts, blocking policies have been effective in settings where exact arrival data is known. Exact knowledge of future arrivals is, however, unrealistic, and a key novelty is the use of robust optimization to compute blocking policy statistics. Numerical experiments demonstrate significant reductions in waiting costs achievable by incorporating forecast data. Model Uncertainty and Robust Simulations Decision Making under Input Uncertainty Chair: Wei Xie (Northeastern University) Sequential Importance Sampling for Hybrid Model Bayesian Inference to Support Bioprocess Mechanism Learning and Robust Control Wei Xie, Keqi Wang, and Hua Zheng (Northeastern University) and Ben Feng (University of Waterloo) Abstract Abstract Driven by the critical needs of biomanufacturing 4.0, we introduce a probabilistic knowledge graph hybrid model characterizing the risk- and science-based understanding of bioprocess mechanisms. It can faithfully capture the important properties, including nonlinear reactions, partially observed state, and nonstationary dynamics. Given very limited real process observations, we derive a posterior distribution quantifying model estimation uncertainty. To avoid the evaluation of intractable likelihoods, Approximate Bayesian Computation sampling with Sequential Monte Carlo (ABC-SMC) is utilized to approximate the posterior distribution. Given high stochastic and model uncertainties, it is computationally expensive to match output trajectories. Therefore, we create a linear Gaussian dynamic Bayesian network (LG-DBN) auxiliary likelihood-based ABC-SMC approach. By matching summary statistics derived through an LG-DBN likelihood that provides high fidelity in capturing complex bioprocess mechanisms, dynamics, and variations, the proposed algorithm can accelerate posterior approximation convergence, support process monitoring, and facilitate robust control.
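The ABC-SMC approach in the Xie, Wang, Zheng, and Feng abstract above replaces intractable likelihood evaluations with comparisons of simulated and observed summary statistics. The toy Python sketch below shows the simplest form of that idea, plain ABC rejection with hand-picked summaries; the toy process, prior, tolerance, and summaries are invented for illustration and are far simpler than the paper's LG-DBN-assisted ABC-SMC.

```python
# Generic likelihood-free ABC rejection sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    """Toy stochastic process: AR(1)-style trajectory with persistence theta."""
    x = np.empty(n)
    x[0] = 1.0
    for t in range(1, n):
        x[t] = theta * x[t - 1] + rng.normal(scale=0.1)
    return x

def summaries(x):
    # Low-dimensional summary statistics stand in for the full trajectory.
    return np.array([x.mean(), x.std(), x[-1]])

s_obs = summaries(simulate(0.9))        # pretend these come from real observations

# ABC rejection: keep prior draws whose simulated summaries land close to the data.
accepted = []
for _ in range(5000):
    theta = rng.uniform(0.5, 1.0)       # prior over the unknown parameter
    if np.linalg.norm(summaries(simulate(theta)) - s_obs) < 0.5:
        accepted.append(theta)

if accepted:
    print(f"approx. posterior mean = {np.mean(accepted):.3f} from {len(accepted)} draws")
```

An SMC variant would evolve a weighted population of such accepted parameters through a decreasing tolerance schedule instead of drawing each candidate independently from the prior.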
Distributionally Robust Optimization for Input Model Uncertainty in Simulation-Based Decision Making Soumyadip Ghosh and Mark Squillante (IBM Research) Abstract Abstract We consider a new approach to solve distributionally robust optimization formulations that address nonparametric input model uncertainty in simulation-based decision making problems. Our approach for the minimax formulations applies stochastic gradient descent to the outer minimization problem and efficiently estimates the gradient of the inner maximization problem through multi-level Monte Carlo randomization. Leveraging theoretical results that shed light on why standard gradient estimators fail, we establish the optimal parameterization of the gradient estimators of our approach that balances a fundamental tradeoff between computation time and statistical variance. We apply our approach to nonconvex portfolio choice modeling under cumulative prospect theory, where numerical experiments demonstrate the significant benefits of this approach over previous related work. Better Safe Than Sorry - An Evaluation Framework For Simulation-Based Theory Construction Marvin Auf der Landwehr, Maik Trott, Maylin Wartenberg, and Christoph von Viebahn (Hochschule Hannover) Abstract Abstract Simulation constitutes a fundamental methodology that is commonly used to analyze the dynamic and emergent behaviors of complex systems. To increase faith in simulation-based theory-building, research needs to be able to demonstrate and evaluate its validity by means of true findings and correct recommendations. However, simulation credibility is pluralist, which means there are different forms of validity depending on context and domain. Hence, based on a systematic literature analysis, this study seeks to dissect simulation evaluation routines and rationales in scientific research. Having critically synthesized 1,609 articles on simulation-based theory-building, we describe the methods used to evaluate simulations. Finally, based on the literature insights, we compile two evaluation frameworks that enable researchers to plan multifarious evaluation episodes and advance a pluralist approach to simulation evaluation. Model Uncertainty and Robust Simulations Uncertainty Quantification Chair: Zeyu Zheng (University of California, Berkeley) Cheap Bootstrap for Input Uncertainty Quantification Henry Lam (Columbia University) Abstract Abstract When a simulation model contains input distributions that need to be calibrated from external data, proper simulation output analysis needs to account for not only the noises from the Monte Carlo sample generation, but also the statistical noises from these input data. The latter issue is known commonly as the input uncertainty in the literature. An array of methods have been proposed to address input uncertainty, but one recurrent challenge faced by many of these methods is the demanding simulation load. In this paper, we present a method, based on a sort of modified bootstrap, to handle input uncertainty with multiplicatively less computation than these existing methods. In particular, this "cheap" bootstrap is able to construct confidence intervals that account for both input data and Monte Carlo noise efficiently by substantially reducing the number of outer samples in a nested procedure. Distributional Discrimination Using Kolmogorov-Smirnov Statistics and Kullback-Leibler Divergence for gamma, log-normal, and Weibull distributions. 
Mario Andriulli (Center for Data Analysis and Statistics, United States Military Academy) and James K. Starling and Blake Schwartz (Center for Data Analysis and Statistics) Abstract Abstract This research compares two methods of choosing a distribution to match sample data: Kullback-Leibler (KL) divergence and the Kolmogorov-Smirnov (KS) statistic. We generate sample data from a known distribution (we used the gamma, log-normal, and Weibull distributions), find best matches to the data for each candidate distribution using maximum likelihood parameter estimation, then use KL divergence and the KS statistic to choose a best fit for the data. Using Monte Carlo simulation, we estimate a probability of correct selection for KL divergence and the KS statistic by determining how frequently each method correctly selects the known underlying distribution. Results vary based on the data-generating distribution type, parameters, and sample size, but we find that KL divergence generally outperforms the KS statistic except in a few rare instances. This is an important result, as the two measures are not directly comparable, and are competing methods for measuring the distance between two distributions. Combining Numerical Linear Algebra with Simulation to Compute Stationary Distributions Zeyu Zheng (UC Berkeley) and Alex Infanger and Peter Glynn (Stanford University) Abstract Abstract This paper introduces the first fully integrated algorithm for combining simulation with numerical linear algebra, as a means of computing stationary distributions for Markov chains and Markov jump processes. We use linear algebra to analyze the "center" of the state space, while simulation is used to estimate contributions to the steady state from path excursions outside the "center". The method yields consistent estimators for stationary expectations, and can be viewed as an application of the variance reduction technique known as conditional Monte Carlo. Financial Engineering, Model Uncertainty and Robust Simulations, Simulation Optimization Sampling and Regression Techniques Chair: Jiaming Liang (Yale University; The Chinese University of Hong Kong, Shenzhen) A Proximal Algorithm for Sampling from Non-smooth Potentials Jiaming Liang (Yale University; The Chinese University of Hong Kong, Shenzhen) and Yongxin Chen (Georgia Institute of Technology) Abstract Abstract In this work, we examine sampling problems with non-smooth potentials and propose a novel Markov chain Monte Carlo algorithm for them. We provide a non-asymptotic analysis of our algorithm and establish a polynomial-time complexity $\tilde{\mathcal{O}}(M^2 d \mathcal{M}_4^{1/2} \varepsilon^{-1})$ to achieve $\varepsilon$ error in terms of total variation distance to a log-concave target density with fourth moment $\mathcal{M}_4$ and $M$-Lipschitz potential, better than most existing results under the same assumptions. Our method is based on the proximal bundle method and an alternating sampling framework. The latter framework requires the so-called restricted Gaussian oracle, which can be viewed as a sampling counterpart of the proximal mapping in convex optimization. One key contribution of this work is a fast algorithm that realizes the restricted Gaussian oracle for any convex non-smooth potential with bounded Lipschitz constant.
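For the distributional discrimination study by Andriulli, Starling, and Schwartz above, the core loop is: fit each candidate family by maximum likelihood, then score the fit with the KS statistic and with a KL divergence estimate. The sketch below illustrates that loop with SciPy; the binning choices, the fixed location parameter, and the histogram-based KL estimate are choices made here for brevity and need not match the paper's setup.

```python
# Illustrative distribution-discrimination loop (not the authors' code).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = stats.gamma.rvs(a=2.0, scale=3.0, size=500, random_state=rng)  # known truth

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "weibull": stats.weibull_min}

bins = np.histogram_bin_edges(data, bins=30)
hist, _ = np.histogram(data, bins=bins, density=True)
centers = 0.5 * (bins[:-1] + bins[1:])
widths = np.diff(bins)

for name, dist in candidates.items():
    params = dist.fit(data, floc=0)                      # maximum likelihood fit
    ks = stats.kstest(data, dist.name, args=params).statistic
    # KL divergence between the empirical histogram and the fitted density.
    q = dist.pdf(centers, *params)
    kl = stats.entropy(hist * widths, q * widths)
    print(f"{name:8s}  KS = {ks:.3f}   KL = {kl:.3f}")
```

Repeating this over many Monte Carlo replications, and recording how often each criterion picks the true family, gives the probability-of-correct-selection comparison described in the abstract.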
A Dynamic Credibility Model with Self-excitation and Exponential Decay Himchan Jeong (Simon Fraser University) and Bin Zou (University of Connecticut) Abstract Abstract This paper proposes a dynamic credibility model for claim counts that extends the benchmark Poisson generalized linear model (GLM) by incorporating self-excitation and exponential decay features from Hawkes processes. Under the proposed model, a recent claim has a bigger impact on the credibility premium than an outdated claim. Empirical results show that the proposed model outperforms the Poisson GLM in both in-sample goodness-of-fit and out-of-sample prediction. A Computational Study of Probabilistic Branch and Bound with Multilevel Importance Sampling Hao Huang (Yuan Ze University); Pariyakorn Maneekul, Danielle F. Morey, and Zelda B. Zabinsky (University of Washington); and Giulia Pedrielli (Arizona State University) Abstract Abstract Probabilistic branch and bound (PBnB) is a partition-based algorithm developed for level set approximation, where investigating all subregions simultaneously is a computationally costly sampling scheme. In this study, we hypothesize that focusing branching and sampling on promising subregions will improve the efficiency of the PBnB algorithm. Two variations of Original PBnB are proposed: Multilevel PBnB and Multilevel PBnB with Importance Sampling. Multilevel PBnB focuses its branching on promising subregions that are likely to be maintained or pruned, as opposed to Original PBnB, which branches more subregions. Multilevel PBnB with Importance Sampling attempts to further improve this efficiency by combining focused branching with a posterior distribution that updates iteratively. We present numerical experiments using benchmark functions to compare the performance of each PBnB variation. Track Coordinator - Professional Development Track: Weiwei Chen (Rutgers University), Seong-Hee Kim (Georgia Institute of Technology) Professional Development Survive and Thrive in Different Academic Systems: A Simulation Perspective Chair: Weiwei Chen (Rutgers University); Seong-Hee Kim (Georgia Institute of Technology) Weiwei Chen (Rutgers University), L. Jeff Hong (Fudan University), Seong-Hee Kim (Georgia Institute of Technology), Sanja Lazarova-Molnar (Karlsruhe Institute of Technology), Szu Hui Ng (National University of Singapore), and Susan Sanchez (Naval Postgraduate School) Abstract Abstract This panel discussion is on the hiring and tenure process in different academic systems, with a focus on Asian universities. Specifically, the panel discusses what to expect when applying for a faculty position in the Asian, European, and American university systems after completing a Ph.D. degree and how to survive as a (possibly the only simulation) junior faculty member in the department. Professional Development The Role of Simulation in Industry Chair: Weiwei Chen (Rutgers University); Seong-Hee Kim (Georgia Institute of Technology) Bahar Biller (SAS Institute, Inc.); Weiwei Chen (Rutgers University); Peter Frazier (Cornell University); Allen Greenwood (FlexSim Software Products, Inc.); Shane Henderson (Cornell University); and Seong-Hee Kim (Georgia Institute of Technology) Abstract Abstract This panel discusses the role of simulation research/researchers in industry and for what type of work industry hires Ph.D. or M.S. students in the simulation field. The panelists include simulation experts in industry, and university professors who have had rich working/consulting experience in the industry.
This panel also aims to give an insight into what to expect when searching for a job in industry and working in industry as an engineer with advanced degrees in the simulation field. Track Coordinator - Project Management and Construction: Jing Du (University of Florida), Joseph Louis (Oregon State University) Project Management and Construction Data-driven Simulation for Construction Chair: Changbum Ahn (Seoul National University) Constructing an Audio Dataset of Construction Equipment from Online Sources for Audio-based Recognition Gilsu Jeong, Changbum Ahn, and Moonseo Park (Seoul National University) Abstract Abstract Monitoring equipment and constructing activity data on construction sites are essential for reliable decision-making through simulation models. Audio-based equipment monitoring can provide critical information about the work process and site conditions. Although a large-scale dataset is essential for audio-based activity recognition, it is time consuming and labor intensive to collect data on site. Therefore, this study proposes a framework for constructing an audio dataset of equipment from online sources. The framework involves selecting appropriate audio using machine learning algorithms, audio denoising, and audio separation models. The validity of the constructed dataset was examined with six classifiers and compared with the benchmark models constructed using real-world equipment audio. The classification results provided 64%–93% accuracy, which demonstrates that the dataset constructed using the proposed framework is effective in recognizing real-world sounds. The outcomes are anticipated to improve audio-based activity recognition processes, potentially helping to monitor equipment productivity. Real-time Activity Duration Extraction of Crane Works for Data-driven Discrete Event Simulation Manuel Jungmann, Lucian Ungureanu, and Timo Hartmann (Technische Universität Berlin) and Hector Posada and Rolando Chacon (Universitat Politècnica de Catalunya) Abstract Abstract The construction industry is struggling with low productivity rates because of a low level of digitalization, dynamic interactions, and uncontrollable circumstances on sites, which make the planning process complex. Usage of the digital twin construction paradigm helps facilitate construction management and leverage the sector's unexploited potential. This research addresses current shortcomings through real-time discrete event simulation. During crane operations, kinematic data were collected and classified by machine learning algorithms for activity recognition and duration extraction. Based on the identified durations, goodness-of-fit techniques determined suitable probability density functions. The resulting probability density functions were used as input parameters in stochastic discrete event simulations. It was shown that, with enriched data collection, probability density functions have to be updated. The data-driven discrete event simulation facilitates decision-making processes by providing more reliable real-time information for the planning of upcoming construction works. Thus, data-based instead of experience-based management can be enabled.
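The data-driven DES workflow in the Jungmann et al. abstract above ends with fitted probability density functions being fed into a stochastic discrete event simulation. The short SimPy sketch below shows that final step for a single shared crane; the lognormal parameters and the two-crew process are hypothetical stand-ins for durations extracted from real kinematic data, not values from the paper.

```python
# Minimal SimPy sketch: fitted duration distribution driving a DES (illustrative only).
import random
import simpy

random.seed(42)

def lift_duration():
    # Stand-in for a distribution selected by goodness-of-fit on observed durations.
    return random.lognormvariate(mu=1.5, sigma=0.4)      # minutes per lift

def crew(env, crane, completed):
    """A work crew repeatedly requests the crane and performs one lift at a time."""
    while True:
        with crane.request() as req:
            yield req
            yield env.timeout(lift_duration())
            completed.append(env.now)

env = simpy.Environment()
crane = simpy.Resource(env, capacity=1)                   # a single tower crane
completed = []
for _ in range(2):                                        # two crews share the crane
    env.process(crew(env, crane, completed))
env.run(until=8 * 60)                                     # one 8-hour shift, in minutes

print(f"lifts completed in the shift: {len(completed)}")
```

In a real-time setting, the fitted parameters would be re-estimated as new durations arrive and the simulation re-run, which is the updating behavior the abstract highlights.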
Reinforcement Learning-Based Transportation and Sway Suppression Methods for Gantry Cranes in Simulated Environment Namkyoun Kim, Minhyuk Jung, Inseok Yoon, Moonseo Park, and Changbum Ryan Ahn (Seoul National University) Abstract Abstract To improve the productivity and safety of cranes, deep reinforcement learning (DRL) has received widespread attention as a framework for developing automated control methods. However, the major challenge of DRL is sample efficiency, which is further exacerbated by the operational and kinematic characteristics of the crane. Our study proposes an approach to improve the sample efficiency in training control policies for two subtasks: horizontal transportation and sway suppression. To do this, we built a simulation environment and defined the state of the environment and the reward. Then, we performed experiments to find out whether three DRL techniques (reward shaping, curriculum learning, and generative adversarial imitation learning) can mitigate the sample efficiency degradation caused by operational and kinematic characteristics. The results show that the techniques used in our experiment are effective in the improvement of the sample efficiency and learning performance of the DRL model for crane operation. Project Management and Construction Simulation for Construction and Infrastructure Management Chair: Jinwoo Kim (University of Michigan) Construction Image Synthetization to Overcome a Small, Biased Real Training Dataset for DNN-Powered Visual Scene Understanding Jinwoo Kim (Nanyang Technological University), Daeho Kim (University of Toronto), and SangHyun Lee (University of Michigan) Abstract Abstract Deep neural networks (DNNs) have become a driving factor of visual scene understanding. However, the shortage of construction training images has been a major barrier to fully leverage its maximum performance potential. To address this issue, we investigate the effectiveness of synthetic images on DNN training in a common real-world scenario where only a small, biased real training image dataset is available. To this end, we synthetize numerous construction training images and conduct a DNN training experiment in real construction settings. Results show that the combined dataset-trained model always outperforms the one trained with only a small, biased real dataset. This finding indicates that an image synthetization approach has promising potential to enhance a given real training dataset in terms of data quantity and diversity. Image synthetization with automated labeling will mitigate the training image shortage, contributing to the development of more accurate and scalable DNNs for construction scene understanding. A Tale of Three Simulations for Project Managers Sanjay Jain (The George Washington University) Abstract Abstract Simulations are a good way for project managers to assess the impact of uncertainties on project plans and subsequent execution. Three types of simulations have been used in literature and to a lesser extent in practice for such purpose, Monte Carlo, discrete event, and systems dynamics. It behooves the project managers to understand the applicability of these different simulations and associated advantages and disadvantages. This paper presents the approach to build a systems dynamics model based on a project plan and compares the learning with those from Monte Carlo and discrete event simulations of the same project plan. 
The paper will help analysts focused on single type of simulation in the past to appreciate the capabilities of alternate approaches. It will also help practicing project managers to appreciate the effort involved, the analysis generated from the three simulations, and factors to determine which one to employ based on their objectives. Discrete Event Simulation For Port Berth Maintenance Planning Ruqayah Alsayed Ebrahim, Shivanan Singh, Yitong Li, and Wenying Ji (George Mason University) Abstract Abstract Industrial and commercial ports, which are one of the three main hubs to the country, require 24/7 operations to maintain the goods export and import flow. Due to the aging and weather factors, berths require regular maintenance, such as replacing old piles, timber finders, marine ladders, rubber fenders, and deck slabs. For efficient berth maintenance, strategies are highly desired to minimize or eliminate any delays in operations during the maintenance. This paper develops a discrete event simulation model using Simphony.NET for berth maintenance processes in Doha Port, Kuwait. The model derives minimum maintenance duration under limited resources and associated uncertainties. The model can be used as a decision support tool to minimize interruption or delays in the port maintenance operations. Project Management and Construction Computer Vision and Ranging for Simulation Chair: Jinwoo Kim (University of Michigan) Road User Localization for Autonomous Vehicle Infrastructure by Leveraging Surveillance Videos Linjun Lu and Fei Dai (West Virginia University) Abstract Abstract The emergence of autonomous vehicles (AVs) provides a sustainable solution to reshape the current transportation system and help mitigate the negative environmental impacts from transportation activities. However, the AVs may undergo unreliable and insufficient perception by the onboard sensors due to the occlusion issues and complex traffic conditions, especially in crowded urban intersections. This study proposed a vision-based method for automated detection and localization of road users in traffic scenarios by leveraging surveillance videos. In this method, the traffic scenario is surveilled in a large visual range and the locations of all surveilled road users are determined. The field experiment was conducted to evaluate the performance of the proposed method. The experiment results demonstrated the promising accuracy of the proposed method for road user location estimation and its potential to provide the AVs with a full-participant perception in complex traffic scenarios and assist them to make the right driving decisions. Urban Subsurface Mapping Via Deep Learning Based GPR Data Inversion Mengjun Wang, Da Hu, and Shuai Li (The University of Tennessee) and Jiannan Cai (The University of Texas at San Antonio) Abstract Abstract Accurate mapping of urban subsurface is essential for managing urban underground infrastructure and preventing excavation accidents. Ground-penetrating radar (GPR) is a non-destructive test method that has been used extensively to locate underground utilities. However, existing approaches are not able to retrieve detailed underground utility information (e.g., material and dimensions) from GPR scans. This research aims to automatically detect and characterize buried utilities with location, dimension, and material by processing GPR scans. 
To achieve this aim, a method for inverting GPR data based on deep learning has been developed to directly reconstruct the permittivity maps of cross-sectional profiles of subsurface structure from the corresponding GPR scans. A large number of synthetic GPR scans with ground-truth permittivity labels were generated to train the inversion network. The experiment results indicated that the proposed method achieved a Mean Absolute Error of 0.53, a Structural Similarity Index Measure of 0.91, and an R2 of 0.96. System Dynamics Modeling of the Construction Supply Chain in Industrial Modularized Construction Projects Lingzi Wu (University of Alberta), Kunkun Li (PCL Industrial Management Inc.), and Simaan AbouRizk (University of Alberta) Abstract Abstract Modeling the construction supply chain has been a challenge as the construction supply chain is a complex and dynamic ecosystem. To understand the variable and volatile nature of construction, this study developed a system dynamic model to simulate the influences of three key factors, scope changes, requests for information, and rework, on project duration. This study reviewed the latest literature, examined the typical modularized heavy industrial construction projects, sketched a causal loop diagram, developed a system dynamics model, and performed model verification and validation. The simulation results for a simple construction project with artificial input revealed that the three identified factors significantly influenced the project duration against the initial planned project duration. The proposed system dynamics model (1) simulates the multi-stakeholder construction supply chain as a holistic ecosystem; (2) quantifies the impact resulting from inefficient information flows on project duration; and (3) forecasts the project duration given these factors. Project Management and Construction Machine Learning for Simulation in Construction Chair: Yitong Li (George Mason University) Accelerating Training Of Reinforcement Learning-Based Construction Robots In Simulation Using Demonstrations Collected In Virtual Reality Lei Huang and Zhengbo Zou (The University of British Columbia) Abstract Abstract The application of construction robots is crucial to mitigate challenges faced by the construction industry, such as labor shortages and low productivity. Reinforcement learning (RL) enables robots to take actions based on observed states, improving flexibility over traditional robots pre-programmed to follow determined sequences of instructions. However, RL-based control is time-consuming to train, hindering the wide adoption of RL-based construction robots. This paper proposes an approach that utilizes expert demonstrations collected from virtual reality to accelerate the RL training of construction robots. For evaluation, we implement the approach for the task of window pickup and installation on a virtual construction site. In our experiment, out of 10 RL agents trained using virtual expert demonstrations, 7 agents converge to an optimal policy faster than the baseline RL agent trained without demonstrations by around 40 epochs, which proves adding expert demonstrations can effectively accelerate the training of robots learning construction tasks. 
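Huang and Zou (above) accelerate RL training for construction robots with expert demonstrations collected in virtual reality. One common, generic way to exploit such demonstrations, shown in the hedged PyTorch sketch below, is to pretrain the policy network by behavior cloning before RL fine-tuning; this is offered only as an illustration of the general idea, not as the authors' method, and the state/action dimensions and demonstration data are placeholders.

```python
# Behavior-cloning pretraining on demonstration data (generic sketch, not the paper's method).
import torch
from torch import nn

STATE_DIM, N_ACTIONS = 12, 6                       # hypothetical robot state/action spaces

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),                      # logits over discrete actions
)

# Placeholder demonstration set; in practice these come from VR teleoperation logs.
demo_states = torch.randn(1024, STATE_DIM)
demo_actions = torch.randint(0, N_ACTIONS, (1024,))

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):                            # supervised pretraining on demonstrations
    logits = policy(demo_states)
    loss = loss_fn(logits, demo_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The pretrained `policy` is then handed to the RL loop (e.g., PPO) as its initial
# policy, so exploration starts near the demonstrated behavior.
print(f"final behavior-cloning loss: {loss.item():.3f}")
```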
Field-Based Assessment of Joint Motions in Construction Tasks with and without Exoskeletons in Support of Worker-Exoskeleton Partnership Modeling and Simulation Sean Tyler Bennett and Peter Gabriel Adamczyk (University of Wisconsin-Madison); Fei Dai (West Virginia University); and Michael Wehner, Dharmaraj Veeramani, and Zhenhua Zhu (University of Wisconsin-Madison) Abstract Abstract Construction workers are required to perform repetitive, physically demanding manual handling work which may pose severe risks for work-related musculoskeletal disorders and injuries. Exoskeletons have substantial potential to protect worker safety and well-being and increase construction productivity by augmenting and complementing workers’ physical abilities. However, a key barrier for their adoption in construction is the lack of rigorous research-based real-world evaluation of the beneficial impact of exoskeleton use in practice. As the foundational step to address this issue, this paper presents a field-based assessment to reconstruct and analyze worker joint motions with and without back- and shoulder-assisted exoskeletons in two construction tasks: pushing/emptying gondolas and installing/removing wooden blocks between steel studs. The findings from this research will inform future investigations on worker-exoskeleton partnership to model and simulate the impact on worker and work performance from various perspectives including biomechanics, ergonomics, productivity, and profitability. Automated Integration of Infrastructure Component Status for Real-Time Restoration Progress Control: Case Study of Highway System in Hurricane Harvey Yitong Li, Fengxiu Zhang, and Wenying Ji (George Mason University) Abstract Abstract Following extreme events, efficient restoration of infrastructure systems is critical to sustaining community lifelines. During the process, effective monitoring and control of the infrastructure restoration progress is critical. This research proposes a systematic approach that automatically integrates component-level restoration status to achieve real-time forecasting of overall infrastructure restoration progress. In this research, the approach is mainly designed for transportation infrastructure restoration following Hurricane Harvey. In detail, the component-level restoration status is linked to the restoration progress forecasting through network modeling and earned value method. Once the new component restoration status is collected, the information is automatically integrated to update the overall restoration progress forecasting. Academically, an approach is proposed to automatically transform the component-level restoration information to overall restoration progress. 
In practice, the approach is expected to ease the communication and coordination efforts between emergency managers, thereby facilitating timely identification and resolution of issues for rapid infrastructure restoration. Track Coordinator - Reliability Modeling and Simulation: Sanja Lazarova-Molnar (University of Southern Denmark, Karlsruhe Institute of Technology), Xueping Li (University of Tennessee), Olufemi Omitaomu (Oak Ridge National Laboratory) Reliability Modeling and Simulation Reliability Modeling and Simulation I Chair: Shima Mohebbi (George Mason University) Spatial Agent-based Simulation of Connected and Autonomous Vehicles to Assess Impacts on Traffic Conditions Shima Mohebbi and Pavithra Sripathanallur Murali (George Mason University) Abstract Abstract Traffic congestion and its effect on aging transportation infrastructures have been a significant issue in many cities. Various policies such as fast-track lanes have been applied to optimize traffic on roadways. However, the increasing adoption of Connected and Autonomous Vehicles (CAVs) motivates the question of whether they can reduce traffic congestion. This study aims to evaluate the integration of CAVs into existing transportation networks comprising both highway and urban roads. To quantify their impact, we develop and validate agent-based simulation models. Two study sites in the State of Oklahoma were identified. We then implemented connected cruise control, green light optimized speed advisory, dynamic route selection, and pedestrian detection as behaviors for CAVs in the simulation. The results indicated that introducing CAVs to the selected road networks improved traffic flow by more than 30% and 20% in terms of average travel time for the urban and highway study sites, respectively. Simulation as a Soft Digital Twin for Maintenance Reliability Operations Xueping Li, Thomas Berg, Gerald Leon Jones, and Kimon Swanson (The University of Tennessee, Knoxville) and Vincent Lamberti, Samantha L. Okowita, Luke Birt, and Pugazenthi Atchayagopal (Consolidated Nuclear Security, LLC Y-12 National Security Complex) Abstract Abstract This paper lays out a framework for using modeling and simulation to build a "soft digital twin" (SDT) and demonstrates its efficacy by modeling the maintenance task processes within a real-world mission-critical facility. Crucial infrastructure systems, such as nuclear power plants, rely on the efficient completion of maintenance tasks (MTs) to achieve high reliability with minimal downtime. Managers often wish for a "crystal ball" that can enable them to configure their resources optimally. The proposed SDT can serve as such a "crystal ball" to gain insights into resource configurations, staff planning, and task scheduling, thanks to the versatility of simulation. This feasibility study uses both a data-driven framework and stochastic modeling methods to construct the SDT. Initial results indicate the SDT can produce reliable results comparable to the test dataset retrospectively, and it can be used to minimize the time in system of the MTs while maximizing throughput.
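The soft digital twin described by Li et al. above is meant to answer staffing and scheduling what-if questions about maintenance tasks. The toy Monte Carlo sketch below illustrates the flavor of such a question, comparing mean time in system and throughput under two hypothetical technician counts; the arrival and repair-time assumptions are invented, and the actual SDT in the paper is data-driven and far richer.

```python
# Toy multi-server FCFS maintenance-queue Monte Carlo (illustrative only).
import heapq
import numpy as np

def simulate(n_technicians, n_tasks=5000, seed=0):
    rng = np.random.default_rng(seed)
    arrivals = np.cumsum(rng.exponential(1.0, n_tasks))          # about 1 task/hour
    services = rng.lognormal(mean=0.5, sigma=0.6, size=n_tasks)  # repair durations (hours)
    free_at = [0.0] * n_technicians                               # technician free times (min-heap)
    heapq.heapify(free_at)
    time_in_system = np.empty(n_tasks)
    for i, (t, s) in enumerate(zip(arrivals, services)):
        start = max(t, heapq.heappop(free_at))                    # wait if all technicians busy
        finish = start + s
        heapq.heappush(free_at, finish)
        time_in_system[i] = finish - t
    horizon = max(arrivals[-1], max(free_at))
    return time_in_system.mean(), n_tasks / horizon               # mean hours, tasks per hour

for c in (2, 3):
    tis, thr = simulate(c)
    print(f"{c} technicians: mean time in system = {tis:.2f} h, throughput = {thr:.2f}/h")
```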
Gaussian Process Model for a Water Cooled Centrifugal Chiller Using Both Manufacturer’s and Operation Data Young-Sub Kim and Cheol-Soo Park (Seoul National University, Department of Architecture and Architectural Engineering) and Eiji Urabe, Jeonghun Gwak, and Yongsung Park (Samsung C&T Corp., High-Tech ENG Team) Abstract Abstract This paper reports a hybrid Gaussian process (GP) model of a chiller system developed with both the manufacturer’s and operation data. The authors collected actual operation data of the chiller in an existing building at a sampling interval of five seconds over two months. It is shown that the GP model based on both the manufacturer’s and operation data performs far better than a model based on the manufacturer’s data or the operation data alone. Reliability Modeling and Simulation Reliability Modeling and Simulation II Chair: Olufemi Omitaomu (Oak Ridge National Laboratory); Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, University of Southern Denmark) Quantifying Error Propagation In Multi-stage Perception System Of Autonomous Vehicles Via Physics-based Simulation Fenglian Pan, Yinwei Zhang, Larry Head, and Jian Liu (University of Arizona) and Maria Elli and Ignacio Alvarez (Intel Corporation) Abstract Abstract Ensuring the safety of an autonomous vehicle (AV) relies on accurate prediction of error occurrences in its perception system. Due to inter-stage functional dependence, an error occurring at a certain stage may propagate to the following stage and generate additional errors. To quantify this error propagation, this paper adopts physics-based simulation, which enables fault injection at different stages of an AV perception system to generate error event data for error propagation modeling. A multi-stage Hawkes process (MSHP) is proposed to predict the error occurrences in each stage, with error propagation represented as a latent triggering mechanism. By explicitly considering the error propagation mechanism, the proposed MSHP outperforms benchmark methods in predicting error occurrence in a physics-based simulation of a multistage AV perception system. The proposed two-step likelihood-based algorithm accurately estimates the model coefficients in a numerical simulation case study. Proxel-based Simulation of Fault Trees in R Parisa Niloofar (University of Southern Denmark), Hossein Haghbin (Persian Gulf University), and Sanja Lazarova-Molnar (University of Southern Denmark) Abstract Abstract Simulation is an advantageous solution for calculating the reliability of complex systems when analytical methods are not able to deliver results in time or when we do not have the resources to apply analytical methods. Proxel-based simulation has been shown to be very efficient in reliability analysis of complex systems. In this paper, we present a package called ftaproxim for proxel-based simulation of complex systems using fault trees, developed in the R programming language. ftaproxim can calculate and plot instantaneous unavailabilities of repairable events and of the system as a whole. Data-driven Reliability Modeling of Smart Manufacturing Systems using Process Mining Jonas Friederich (University of Southern Denmark) and Sanja Lazarova-Molnar (Karlsruhe Institute of Technology) Abstract Abstract Accurate reliability modeling and assessment of manufacturing systems lead to lower maintenance costs and higher profits. However, the complexity of modern Smart Manufacturing Systems poses a challenge to traditional expert-driven reliability modeling techniques.
The growing research field of data-driven reliability modeling seeks to harness the abundance of data from such systems to improve and automate the reliability modeling processes. In this paper, we propose the use of Process Mining techniques to support the extraction of reliability models from event data generated in Smart Manufacturing Systems. More specifically, we extract a stochastic Petri net which can be used to analyze the overall system reliability as well as to test new system configurations. We demonstrate our approach with an illustrative case study of a flow shop manufacturing system with parallel operations. The results indicate that using Process Mining techniques to extract accurate reliability models is feasible. Reliability Modeling and Simulation Reliability Modeling and Simulation III Chair: Xueping Li (University of Tennessee); Parisa Niloofar (SDU) Uncertainty and Sensitivity Analyses on Solar Heat Gain Coefficient of a Glazing System with External Venetian Blind Jeong-Yun Lee, Young-Sub Kim, and Cheol-Soo Park (Seoul National University) Abstract Abstract This study compares the static vs. dynamic solar heat gain coefficient (SHGC) of a glazing system with an external venetian blind. For many engineering applications, the static SHGC is still widely used. The authors aimed to investigate the difference between the two SHGCs (static vs. dynamic). For this purpose, “pyWinCalc”, developed by the US LBNL, was employed to simulate the dynamic thermal behavior of the system. Sobol sampling was conducted for uncertainty and sensitivity analyses of the SHGC. It was found that both the variation in SHGC with slat angle and the uncertainty in SHGC are significant. Role of Simulation Tool in National Building Energy Rating system Seung-Ju Lee, Young-Seo Yoo, Chul-Hong Park, and Cheol-Soo Park (Seoul National University) Abstract Abstract In this paper, the authors present a simulation case study to show how sensitivity analysis can be beneficially used for objective building performance assessment. The case study was made by comparing the existing prescriptive building energy rating system (weights-based) in South Korea with a new sensitivity-based performance rating (which can be regarded as performance-based). For this purpose, a surrogate model of EnergyPlus, one of the most advanced dynamic simulation tools, was developed and then used for Sobol analysis. By substituting sensitivity indices for the weights, the agreement between the rating score and EnergyPlus simulation results improved from an R2 of 0.06% (existing) to an R2 of 89.3% (new). Quantified Performance Gap between Simulated vs. Actual Energy Use of Buildings Young-Seo Yoo and Cheol-Soo Park (Seoul National University) Abstract Abstract It has been widely acknowledged that building energy simulation tools play a role in optimal building energy design and accurate energy prediction. However, a building’s thermal behavior is affected by uncertain factors (e.g., indoor and outdoor environments, occupant behavior, and simulation parameters), and thus the performance gap between predicted and actual energy use is non-negligible. This paper presents the quantified performance gap for 152 commercial and educational buildings in South Korea. It is concluded that the performance gap is significant and more effort must be made on quantification of the uncertainty and on stochastic decision making.
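The performance gap reported in the last abstract is, in essence, a discrepancy measure between simulated and measured consumption. The paper does not state which metrics it uses, but calibration metrics in the style of ASHRAE Guideline 14, NMBE and CV(RMSE), are commonly applied for exactly this purpose; the sketch below computes them on hypothetical monthly data.

```python
import numpy as np

def performance_gap_metrics(measured, simulated):
    """NMBE and CV(RMSE), calibration metrics in the style of ASHRAE
    Guideline 14; not necessarily the metrics used in the paper above."""
    m = np.asarray(measured, dtype=float)
    s = np.asarray(simulated, dtype=float)
    n = len(m)
    nmbe = 100.0 * np.sum(m - s) / (n * m.mean())                # overall bias, in %
    cv_rmse = 100.0 * np.sqrt(np.mean((m - s) ** 2)) / m.mean()  # spread, in %
    return nmbe, cv_rmse

# Hypothetical monthly electricity use (kWh) for one building.
measured  = [120, 115, 130, 150, 170, 210, 240, 235, 190, 160, 130, 125]
simulated = [110, 112, 125, 140, 160, 190, 215, 220, 180, 150, 122, 118]
nmbe, cv_rmse = performance_gap_metrics(measured, simulated)
print(f"NMBE = {nmbe:.1f}%, CV(RMSE) = {cv_rmse:.1f}%")
```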
Track Coordinator - Scientific Applications: Rafael Mayo-García (CIEMAT), Esteban Mocskos (CSC-CONICET, University of Buenos Aires (AR)) Scientific Applications Scientific Applications I Chair: Rafael Mayo-García (CIEMAT) Covid-19 Suppression Using a Testing/Quarantine Strategy: a Multi-paradigm Simulation Approach Based on a Seirtq Compartmental Model Samuel Ropert (Centro Ciencia & Vida, Fundacion Ciencia para la Vida; Facultad de Ingeniería, Arquitectura y Diseño, Universidad San Sebastián); Alejandro Bernardin (Centro Ciencia & Vida, Fundacion Ciencia para la Vida); and Tomas Perez-Acle (Centro Ciencia & Vida, Fundacion Ciencia para la Vida; Facultad de Ingeniería, Arquitectura y Diseño, Universidad San Sebastián) Abstract Abstract During the current COVID-19 pandemic, non-pharmaceutical interventions represent the first line of defense to tackle the dispersion of the disease. One of the main non-pharmaceutical interventions is testing, which consists of the application of clinical tests aiming to detect and quarantine infected people. Here, we extended the SEIR compartmental model into a SEIRTQ model, adding new states representing the testing (T) and quarantine (Q) dynamics. In doing so, we have characterized the effects of a set of testing and quarantine strategies using a multi-paradigm approach, based on ordinary differential equations and agent-based modelling. Our simulations suggest that iterative testing over 10% of the population could effectively suppress the spread of COVID-19 when testing results are delivered within 1 day. Under these conditions, a reduction of at least 95% of the infected individuals can be achieved, along with a drastic reduction in the number of super-spreaders. Characteristics of Simulation: A Meta-Review of Modern Simulation Applications Hendrik van der Valk, Stephanie Winkelmann, Felix Ramge, Joachim Hunker, Katharina Langenbach, and Markus Rabe (TU Dortmund University) Abstract Abstract Simulation studies enable practitioners and researchers to prove assumptions and hypotheses. Through experiments, they can analyze real-world and conceptual systems. Hence, simulation is an integral part of industrial and scientific work. Nevertheless, simulation applications have to adapt to modern, digitized ways of working. As simulation evolves analogously to the industrial world, the scientific world must adjust accordingly, and new research streams for the next steps of simulation's evolution must be defined. This work aims at gathering and exhibiting the properties of recent simulation studies. It provides the groundwork for the definition of research streams for the future of simulation. The paper lays the foundation for prescriptive design knowledge on simulation studies through a structured literature review. Thus, researchers and practitioners are enabled to take on the current challenges of simulation based on a descriptive, up-to-date data basis. GPU-Accelerated Simulation Ensembles of Stochastic Reaction Networks Till Köster, Leon Herrmann, Philipp Andelfinger, and Adelinde M. Uhrmacher (University of Rostock) Abstract Abstract Stochastic Simulation Algorithms are widely used for simulating reaction networks in cellular biology. Due to the stochastic nature of models and the large parameter spaces involved, many simulation runs are frequently needed.
We approach this computational challenge by expanding the hardware used for execution with massively parallel graphics processing units (GPUs) to execute these ensembles of runs concurrently in a form of coarse-grained parallelization. We employ state-of-the-art algorithms to study the degree to which GPUs can augment the computation resources available for ensemble studies. Furthermore, the challenge of efficient work assignment given the GPU's synchronous mode of execution is explored. There are several algorithmic tradeoffs to consider for models with different execution characteristics, which we investigate in a performance study across four different models. Our results indicate that for some models adding a typical desktop GPU has a similar effect on performance as up to 40 added CPU cores. Scientific Applications Scientific Applications II Chair: Rafael Mayo-García (CIEMAT) Design and Deployment of a Simulation Platform: Case Study of an Agent-Based Model for Youth Suicide Prevention Joshua J. Huddleston, Michael C. Galgoczy, Kareem Ghumrawi, and Philippe J. Giabbanelli (Miami University) and Ketra L. Rice, Nisha Nataraj, Margaret M. Brown, Christopher R. Harper, and Curtis S. Florence (Centers for Disease Control and Prevention) Abstract Abstract Research has examined the process of engaging end-users in co-designing a simulation model and using it within workplace settings. In contrast, few studies document the software development aspect of creating, packaging, and deploying a simulation platform that satisfies end-users’ needs. In this case study, we detail these aspects regarding a platform involving Agent-Based Modeling for youth suicide prevention. Goals and constraints are detailed in three categories of needs, each showing how competing demands are addressed via software solutions: 1) data, encompassing data security (e.g., no direct access to individual-level data) and data updates (e.g., changing a model’s initialization without coding expertise); 2) deployment, to satisfy security protocols (e.g., secure intranet communications) and the end-users' experience (e.g., ease of installation); 3) accessibility, ensuring that individuals with impairments are not excluded from using simulations. By presenting practical solutions to each category, our case study supports modelers in addressing their own deployment needs. py2PowerDEVS: Construction and Manipulation of Large Complex Structures for PowerDEVS Models Via Python Scripting Ezequiel Pecker-Marcosig, Matías Alejandro Bonaventura, Esteban Lanzarotti, Lucio Santi, and Rodrigo Daniel Castro (Departamento de Computación, FCEyN-UBA / Instituto de Ciencias de la Computación (ICC-CONICET)) Abstract Abstract As the disciplines of modeling and simulation evolve and become more efficient, the complexity of the scientific applications that can be tackled by simulation modeling continues to increase. The approach of building complex models through the composition and interconnection of modular units of behavior has been a key factor in this success. However, very often complexity entails a significant growth in the size and intricacy of the structure of simulation models. In this paper, we confer new capabilities for building models with large complex structures to PowerDEVS, an established C++-based simulation toolkit for the DEVS formalism, which has typically based its modular modeling experience on a GUI.
We present py2PowerDEVS, a Python framework that seamlessly integrates pre-built modular PowerDEVS components into the powerful and growing ecosystem of Python scripting, enabling the algorithmic design of large complex PowerDEVS model structures. We demonstrate the use of py2PowerDEVS in three scientific application domains. Track Coordinator - Simulation and AI: Edward Y. Hua (MITRE Corporation), Yijie Peng (Peking University), Simon J. E. Taylor (Brunel University London) Simulation and AI Simulation and AI Methodology I Chair: Yijie Peng (George Mason University) Batching on Biased Estimators Shengyi He and Henry Lam (Columbia University) Abstract Abstract Existing batching methods are designed to cancel the variability parameter but not the bias of estimators, and thus are applied typically in the setting of unbiased estimation. We provide a batching scheme that cancels out the bias and variability parameters of estimators simultaneously, yielding asymptotically exact confidence intervals for biased estimation problems. We apply our batching method to finite difference estimators. We extend our method to the multivariate case in constructing confidence regions. We validate our theory and analyze the effect of the number of batches through numerical examples. Distributional Input Uncertainty Motong Chen, Zhenyuan Liu, and Henry Lam (Columbia University) Abstract Abstract The vast majority of the simulation input uncertainty literature focuses on estimating target output quantities that are real-valued. However, outputs of simulation models are random, and real-valued targets essentially serve only as summary statistics. In this paper, we study the input uncertainty problem from a distributional view, namely we construct simultaneous confidence bands for the entire output distribution function. Our approach utilizes a novel test statistic that consists of the supremum of the sum of a Brownian bridge and a mean-zero Gaussian process whose covariance function is characterized by the influence function of the true output distribution function, which generalizes the Kolmogorov-Smirnov statistic to account for input uncertainty. We demonstrate how subsampling helps estimate the covariance function of the Gaussian process, thereby leading to an implementable estimation of the quantile of the test statistic and a statistically valid confidence band. Finally, we present some supporting numerical experiments. Simulation of Stance Perturbation Peter Carragher, Lynnette Hui Xian Ng, and Kathleen Carley (Carnegie Mellon University) Abstract Abstract In this work, we use Agent-Based Modelling (ABM) to determine when and why intentional social influence operations are likely to succeed. We propose ABM as an evaluation method for intentional stance perturbations in social networks. We do so by developing and verifying an evaluation criterion for stance perturbations, then developing a co-evolutionary Social Influence (SI) model, and finally expounding on an analysis of parameters affecting stance perturbations.
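The stance perturbation study above is described only at the level of its modeling steps. As a purely generic illustration of the kind of agent-based social influence simulation involved (not the authors' co-evolutionary SI model), the sketch below places agents with continuous stances on a small-world network, lets a handful of fixed "operation" agents push an extreme stance, and measures the resulting shift in the population mean; every parameter is an assumption.

```python
import networkx as nx
import numpy as np

# Generic social-influence sketch: agents repeatedly average their stance
# with neighbors, while a few fixed "operation" agents hold an extreme
# stance. All parameters are illustrative, not from the paper.
rng = np.random.default_rng(0)
G = nx.watts_strogatz_graph(n=200, k=6, p=0.1, seed=0)
stance = rng.uniform(-1, 1, G.number_of_nodes())
operatives = rng.choice(G.number_of_nodes(), size=10, replace=False)

baseline = stance.mean()
for step in range(50):
    stance[operatives] = 1.0                  # perturbation agents never move
    new = stance.copy()
    for v in G.nodes:
        nbrs = list(G.neighbors(v))
        if nbrs:
            new[v] = 0.8 * stance[v] + 0.2 * np.mean(stance[nbrs])
    stance = new

print(f"mean stance before: {baseline:.3f}, after perturbation: {stance.mean():.3f}")
```

An evaluation criterion of the kind the paper verifies could then be phrased as a threshold on this before/after shift under repeated randomized runs.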
Simulation and AI Simulation and AI Methodology II Chair: Claudia Szabo (University of Adelaide) Use of Reinforcement Learning for Prioritizing Communications in Contested and Dynamic Environments Dustin Craggs and Kin Leong Lee (University of Adelaide); Claudia Szabo (University of Adelaide); Vanja Radenovic (Consilium Technology); and Benjamin Campbell (Defence Science and Technology Group) Abstract Abstract Systems operating in military operations and crisis situations usually do so in contested and dynamic environments with poor and unreliable network conditions. Individual nodes within these systems usually have an incomplete, local, and changing view of the system and its operating environment, and, as such, optimizing how nodes communicate in order to improve decision making is critical. In this paper, we propose the integration of reinforcement learning algorithms with the SMARTNet middleware, a middleware that prioritizes and controls messages sent by each node, allowing it to determine the best priority for each message type. We experiment with both direct and indirect prioritisation approaches, where the reinforcement learning dissemination system determines the specific priority of a message upon its arrival or upon its sending, respectively. Our experimental analysis shows significant improvements over the baseline in some of the high-congestion scenarios but also highlights several avenues for future work. Human Imperceptible Attacks and Applications to Improve Fairness Xinru Hua, Huanzhong Xu, and Jose Blanchet (Stanford University) and Viet Anh Nguyen (VinAI Research) Abstract Abstract Modern neural networks are able to perform at least as well as humans in numerous tasks involving object classification and image generation. However, small perturbations which are imperceptible to humans may significantly degrade the performance of well-trained deep neural networks. We provide a Distributionally Robust Optimization (DRO) framework which integrates human-based image quality assessment methods to design optimal attacks that are imperceptible to humans but significantly damaging to deep neural networks. Through extensive experiments, we show that our attack algorithm generates better-quality (less perceptible to humans) attacks than other state-of-the-art human imperceptible attack methods. Moreover, we demonstrate that DRO training using our optimally designed human imperceptible attacks can improve group fairness in image classification. Towards the end, we provide an algorithmic implementation to speed up DRO training significantly, which could be of independent interest. Towards AI Robustness Multi-agent Adversarial Planning in Game Play Yan Lu and Sachin Shetty (Old Dominion University) Abstract Abstract This paper applied Monte Carlo Tree Search (MCTS) and its variant algorithm “Upper Confidence bounds applied to Trees” (UCT) to the Chinese checkers game and visualized it in the Ludii games portal. Since the MCTS approach cannot guarantee a finite game length in our simulation and the winning strategy is inefficient, we developed a Convolutional Neural Network (CNN) model formulated on top of the MCTS algorithm to improve performance in the Chinese checkers game. Our experiments and simulations show that by using the weak labels generated by the MCTS algorithm, the CNN-based model could learn without supervised signals from human interventions and execute the game strategy in a finite time.
PyGame and the Ludii game portal are used in our simulation to visualize the game and show the final game results. Simulation and AI Simulation and Artificial Intelligence: A Foundation for a New, Reimagined Tomorrow? Chair: Simon J. E. Taylor (Brunel University London) Simon J. E. Taylor (Brunel University London), Wentong Cai (Nanyang Technological University), L. Jeff Hong (Fudan University), Bodhibrata Nag (Indian Institute of Management Calcutta), Chew Ek Peng (National University of Singapore), and Claudia Szabo (University of Adelaide) Abstract Abstract How do Modeling & Simulation (M&S) and Artificial Intelligence (AI) help us to understand the world around us? M&S techniques build a model from observations of a given system and simulate that model to make observations and to draw conclusions. AI uses logic and rules applied to large datasets taken from the system to derive observations and conclusions. M&S can be used where there is little data. However, models take time to build and experimentation can be computationally demanding. AI can make predictions from data-rich environments. However, predictions can lack explainability and AI can struggle to make “what-if” decisions from new scenarios. The weaknesses of both techniques are also their complementary strengths. The combination of M&S and AI together may create new and more powerful decision support tools that can benefit our global society. Simulation and AI Reinforcement Learning I Chair: Edward Y. Hua (MITRE Corporation) A Customizable Reinforcement Learning Environment for Semiconductor Fab Simulation Benjamin Kovács, Pierre Tassel, and Martin Gebser (Alpen-Adria-Universität Klagenfurt) and Georg Seidel (Infineon Technologies Austria AG) Abstract Abstract Reinforcement learning-based methods are increasingly used to solve NP-hard combinatorial optimization problems. By learning from the problem structure, or the characteristics of instances, the approach has high potential compared to alternative techniques that solve all instances from scratch. This work introduces a novel framework for creating (deep) reinforcement learning environments simulating up to real-world-scale semiconductor fab scheduling problem instances. The highly configurable framework supports creating single- and multi-agent environments where the simulated factory is either partially or fully controlled by the learning agents. The action and observation spaces and the reward function are customizable based on pre-defined features. Our toolkit creates environments with a standard interface that can be integrated with various algorithms in a few minutes. The simulated datasets may involve challenging features like downtimes, batching, rework, and sequence-dependent setups. These can also be turned off, and simulated datasets can be automatically downscaled during the prototyping phase. Simulation of the Internal Electric Fleet Dispatching Problem at a Seaport: A Reinforcement Learning Approach Matteo Brunetti, Giovanni Campuzano, and Martijn Mes (University of Twente) Abstract Abstract Through discrete-event simulation, we evaluate the impact of using a fleet of electric and autonomous vehicles (EAVs) to decouple inbound trucks from the internal freight flows in a seaport located in the Netherlands. To support the operational control of EAVs, we use agent-based modeling and support the decision-making capabilities using a reinforcement learning (RL) approach.
More specifically, to model the assignment of EAVs to container transport or battery charging, we introduce the Internal Electric Fleet Dispatching Problem (IEFDP). To solve the IEFDP, we propose an RL approach and benchmark its performance against four different assignment heuristics. Our results are compelling: the RL approach outperforms the benchmark heuristics, and the decoupling process significantly reduces congestion and waiting times for truck drivers as well as potentially improving the traffic’s sustainability, at the cost of a slight increase in the length of stay of containers at the port. Supplier Forecasting with Visibility Using Privacy Preserving Federated Learning Bo Zhang, Wen Jun Tan, and Wentong Cai (Nanyang Technological University) and Allan N. Zhang (Agency for Science, Technology and Research) Abstract Abstract In the fluctuating and unstable supply chain environment, accurate demand forecasting is especially important. To improve prediction accuracy, one possible way is to improve supply chain visibility by sharing information and knowledge among the supply chain entities. However, there is a potential risk that the raw data may be leaked to competitors, affecting business opportunities. To avoid information leakage, secure demand forecasting with supply chain visibility is necessary. This paper proposes a Federated Learning-based approach to predict demand for a supplier with supply chain visibility while protecting the data privacy of other entities within the supply chain. To evaluate forecasting accuracy, we designed a supply chain simulation model to generate data. The experimental results show that our proposed method outperforms the other demand forecasting methods without visibility and achieves similar performance to the method with full visibility. Simulation and AI Reinforcement Learning II Chair: Edward Y. Hua (MITRE Corporation) Automatically Explaining a Model: Using Deep Neural Networks to Generate Text From Causal Maps Anish Shrestha, Kyle Mielke, Tuong Anh Nguyen, and Philippe J. Giabbanelli (Miami University) Abstract Abstract Simulation models start as conceptual models, which list relevant factors and their relationships. In complex socio-environmental problems, these conceptual models are routinely created with participants, via a 'participatory modeling' approach. Transparency is a tenet of participatory modeling: participants should easily provide their input into the model-building process and see how that input is utilized. Although several elicitation methods are transparent, the resulting conceptual model can become too large and difficult to interpret. Usability studies have shown that participants struggle to interact with such large conceptual models, even if they contributed to creating parts of them. In this paper, we propose to automatically transform these large conceptual models into a more familiar format for participants: textual reports. We designed and implemented a process combining Natural Language Generation (via the deep learning GPT-3 model) and Network Science. Two case studies demonstrate that our prototype generates sentences that perform satisfactorily on several metrics. Quantile-based Policy Optimization for Reinforcement Learning Jinyang Jiang (Peking University), Jiaqiao Hu (Stony Brook University), and Yijie Peng (Peking University) Abstract Abstract Classical reinforcement learning (RL) aims to optimize the expected cumulative rewards.
In this work, we consider the RL setting where the goal is to optimize the quantile of the cumulative rewards. We parameterize the policy controlling actions by neural networks and propose a novel policy gradient algorithm called Quantile-Based Policy Optimization (QPO) and its variant Quantile-Based Proximal Policy Optimization (QPPO) to solve deep RL problems with quantile objectives. QPO uses two coupled iterations running at different time scales for simultaneously estimating quantiles and policy parameters. Our numerical results demonstrate that the proposed algorithms outperform the existing baseline algorithms under the quantile criterion. Reinforcement Learning With Discrete Event Simulation: The Premise, Reality, And Promise Sahil Belsare, Emily Diaz Badilla, and Mohammad Dehghanimohammadabadi (Northeastern University) Abstract Abstract Several studies have shown the success of Reinforcement Learning (RL) for solving sequential decision-making problems in domains like robotics, autonomous vehicles, manufacturing, supply chain, and health care. For such applications, uncertainty in real-life environments presents a significant challenge in training an RL agent. RL requires a large number of trials (training examples) to learn a good policy. One of the approaches to tackle these obstacles is augmenting RL with a Discrete Event Simulation (DES) model. Learning from a simulated environment makes the training process of the RL agent more efficient, faster, and even safer by alleviating the need for expensive real-world trials. Therefore, integrating RL algorithms with simulation environments has inspired many researchers in recent years. In this paper, we analyze the existing literature on RL models using DES to put forward the benefits, application areas, challenges, and scope for future work in developing such models for industrial use cases. Simulation and AI Simulation and AI Methodology III Chair: Kim van den Houten (Technische Universiteit Delft) Analysis of Measure-Valued Derivatives in a Reinforcement Learning Actor-Critic Framework Kim van den Houten (TU Delft) and Emile Van Krieken and Bernd Heidergott (Vrije Universiteit Amsterdam) Abstract Abstract Policy gradient methods are successful for a wide range of reinforcement learning tasks. Traditionally, such methods utilize the score function as the stochastic gradient estimator. We investigate the effect of replacing the score function with a measure-valued derivative within an on-policy actor-critic algorithm. The hypothesis is that measure-valued derivatives reduce the need for the score function variance reduction techniques that are common in policy gradient algorithms. We adapt the actor-critic to measure-valued derivatives and develop a novel algorithm. This method keeps the computational complexity of the measure-valued derivative within bounds by using a parameterized state-value function approximation. We show empirically that measure-valued derivatives have comparable performance to score functions on the Pendulum and MountainCar environments. The empirical results of this study suggest that measure-valued derivatives can serve as a low-variance alternative to score functions in on-policy actor-critic algorithms and indeed reduce the need for variance reduction techniques.
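To see why a measure-valued derivative (MVD) can act as a low-variance alternative to the score function, consider the textbook one-dimensional case of differentiating E[g(X)] with respect to the mean of X ~ N(theta, sigma^2). The MVD of the Gaussian mean is the triple (1/(sigma*sqrt(2*pi)), theta + Rayleigh(sigma), theta - Rayleigh(sigma)). The sketch below compares the two estimators on a toy integrand; it is a standard illustration of the general idea, not the paper's actor-critic algorithm.

```python
import numpy as np

# Toy comparison of the score-function and measure-valued-derivative (MVD)
# estimators of d/dtheta E[g(X)], X ~ N(theta, sigma^2).
# For g(x) = x**2 the true gradient is 2*theta.
rng = np.random.default_rng(1)
theta, sigma, n = 0.5, 1.0, 100_000
g = lambda x: x ** 2                       # true gradient: 2 * 0.5 = 1.0

# Score-function (likelihood-ratio) estimator: g(x) * (x - theta) / sigma^2
x = rng.normal(theta, sigma, n)
sf = g(x) * (x - theta) / sigma ** 2

# MVD estimator: c * (g(X_plus) - g(X_minus)) with the Gaussian-mean triple
r = rng.rayleigh(scale=sigma, size=n)
c = 1.0 / (sigma * np.sqrt(2 * np.pi))
mvd = c * (g(theta + r) - g(theta - r))

print(f"score function: mean={sf.mean():.3f}, std={sf.std():.2f}")
print(f"MVD:            mean={mvd.mean():.3f}, std={mvd.std():.2f}")
```

Both estimators are unbiased here; the interest, as in the abstract above, lies in how their per-sample variances compare.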
Tree-Structured Parzen Estimators With Uncertainty For Hyperparameter Optimization Of Machine Learning Algorithms Alejandro Morales-Hernández, Inneke Van Nieuwenhuyse, and Sebastian Rojas Gonzalez (Hasselt University, Data Science Institute) Abstract Abstract Hyperparameter optimization (HPO) is one of the first tasks to be performed during the application of Machine Learning (ML) algorithms to real problems. Tree-structured Parzen estimators (TPE) have demonstrated their ability to find hyperparameter configurations in high dimensions with efficient evaluation budgets. However, as is common in HPO procedures, TPE ignores the fact that the expected performance of the algorithm, for any given HPO configuration, is affected by uncertainty. Building on the TPE algorithm proposed by Bergstra et al. (2011), we propose a strategy to account for this uncertainty and show that its management leads to better algorithm performance. Enhanced Simulation Metamodeling via Graph and Generative Neural Networks Wang Cen and Peter J. Haas (University of Massachusetts Amherst) Abstract Abstract For large, complex simulation models, simulation metamodeling is crucial for enabling simulation-based-optimization under uncertainty in operational settings where results are needed quickly. We enhance simulation metamodeling in two important ways. First, we use graph neural networks (GrNN) to allow the graphical structure of a simulation model to be treated as a metamodel input parameter that can be varied along with real-valued and integer-ordered inputs. Second, we combine GrNNs with generative neural networks so that a metamodel can rapidly produce not only a summary statistic like E[Y], but also a sequence of i.i.d. samples of Y or even a stochastic process that mimics dynamic simulation outputs. Thus a single metamodel can be used to estimate multiple statistics for multiple performance measures. Our metamodels can potentially serve as surrogate models in digital-twin settings. Preliminary experiments indicate the promise of our approach. Simulation and AI Simulation and AI Methodology IV Chair: Ruijiu Mao (National University of Singapore) An Efficient Dynamic Sampling Policy for Monte Carlo Tree Search Gongbo Zhang and Yijie Peng (Peking University) and Yilong Xu (Beijing Jiaotong University) Abstract Abstract We consider the popular tree-based search strategy within the framework of reinforcement learning, the Monte Carlo Tree Search (MCTS), in the context of finite-horizon Markov decision process. We propose a dynamic sampling tree policy that efficiently allocates limited computational budget to maximize the probability of correct selection of the best action at the root node of the tree. Experimental results on Tic-Tac-Toe and Gomoku show that the proposed tree policy is more efficient than other competing methods. Using Generative Adversarial Networks To Validate Discrete Event Simulation Models José Arnaldo Barra Montevechi, Gustavo Teodoro Gabriel, Afonso Teberga Campos, Carlos Herinque Santos, and Fabiano Leal (Universidade Federal de Itajubá) and Michael Machado (Flexsim Brazil) Abstract Abstract Computer model validation is an essential step in simulation projects. The literature suggests using statistical techniques for comparing the outputs from the simulated model and the real system; however, statistical assumptions may be violated. Thus, Generative Adversarial Networks (GANs) are an alternative since they adapt to any data. 
The work aims to use GANs to generate synthetic data from the real data and to use the Discriminator to discriminate real from simulated outputs. Five statistical distributions were trained, and distributions with the same characteristics were submitted to verify the power of the test. The curves of each distribution were generated. In addition, the new validation technique was applied to a real Discrete Event Simulation case of a large emergency department. The results showed that GANs effectively discriminate data and can help validate computer models. Interpretable User Behavioral Analysis and Personalized Recommendation with Side Information Chen Feng (Purdue University) and Ruijiu Mao (National University of Singapore) Abstract Abstract Due to information overload, recommender systems have been developed and widely applied to better predict customer behaviors, empowered by interaction data. In this paper, we develop an interpretable recommender system to improve click behavior prediction accuracy. Our proposed recommender system consists of i) designing a user-item interaction rating system defined as the weighted sum of multiple interaction variables, where the weight vector is regarded as a hyper-parameter, ii) using a matrix factorization technique to capture the latent structure from the observable user-item interactions and predict personalized ratings for items not shown to users, and iii) determining the weight vector by optimizing the AUC, which evaluates click behavior prediction accuracy on a validation set. To be specific, an algorithm with radial basis function surrogates is applied for hyperparameter optimization. We also develop a case study with user behavioral data from Netease music and observe a 15.5% improvement in AUC when incorporating multiple sources of information. Simulation and AI Applications I Chair: Dehghani Mohammad (Northeastern University) A Simulation-aided Deep Reinforcement Learning Approach for Optimization of Automated Sorting Center Processes Deepak Mohapatra, Aritra Pal, Ankush Ojha, Supratim Ghosh, Marichi Agarwal, and Chayan Sarkar (Tata Consultancy Services Ltd) Abstract Abstract Operations in a parcel sorting center (SC) are manifold and lead to multiple NP-hard optimization problems, namely parcel-chute assignment, online bin-packing, scheduling, and routing. The advent of multi-agent robotics has accelerated the process of automation in sorting centers, which has led to the need for sophisticated algorithms to optimize these operations within an SC. To this end, we propose RL−SORT: a simulation-aided deep reinforcement learning-based algorithm which jointly optimizes the parcel-chute assignment and online roller-cage (RC) packing problems. Through experimentation on our simulation framework, we show that RL−SORT not only outperforms baselines, but also has a low computational burden. Further, it is able to significantly reduce the number of RCs used, thereby reducing transportation costs. Predictive Maintenance Powered By Machine Learning And Simulation Csaba Attila Boer, Mernout Burger, Yvo Saanen, and Edwin Straub (TBA Group) Abstract Abstract To optimize the balance between costs and reliability of yard cranes, it is important to perform maintenance when the risk of failure becomes high while possibly delaying planned maintenance when the yard crane shows no signs of possible problems. To accomplish this, we investigate the possibility of applying predictive maintenance for yard cranes.
The application of predictive maintenance requires historical data collection and preprocessing of equipment sensor and maintenance data. To get a feeling of the possibilities and limitations of predictive maintenance for yard cranes, before investing time and money to collect operational data, we have used simulations to generate synthetic data for a few components of the cranes. Using the simulated crane data, a neural network was trained to predict upcoming component failures. The results show that using simulation we can identify the possibilities and limitations of machine learning for predicting failures of components of the crane. Applied Reinforcement Learning for Decision Making in Industrial Simulation Environments Ashwin Devanga, Emily Diaz Badilla, and Mohammad Dehghanimohammadabadi (Northeastern University) Abstract Abstract The industrial sector is going through a digital transformation with a high emphasis on Artificial Intelligence-driven operations. In this transformation, the combination between simulation and machine learning has been a key enhancer to provide state-of-the-art solutions to complex systems by training algorithms over virtual representations of industry. In recent years, Reinforcement Learning (RL) has gained traction in this field and shown success in solving sequential decision-making problems. The purpose of this paper is to show how to implement simulation-based Reinforcement Learning. The proposed framework integrates Simio, as a discrete-event simulation environment, and Python, to include the RL algorithm. To demonstrate the applicability of this framework, a job-shop scheduling problem under different scenarios is tested and its results are compared with benchmark heuristic dispatching rules. Modeling Methodology, Simulation and AI Model Recognition and Identification Chair: Edward Y. Hua (MITRE Corporation) Can Machines Solve General Queueing Problems? Eliran Sherzer (University of Toronto), Arik Senderovich (York University), and Dmitry Krass and Opher Baron (University of Toronto) Abstract Abstract We study how well a machine can solve a general problem in queueing theory, using a neural net to predict the stationary queue-length distribution of an M/G/1 queue. This problem is, arguably, the most general queuing problem for which an analytical ``ground truth'' solution exists. We overcome two key challenges: (1) generating training data that provide ``diverse'' service time distributions, and (2) providing continuous service distributions as input to the neural net. To overcome (1), we develop an algorithm to sample phase-type service time distributions that cover a broad space of non-negative distributions; exact solutions of M/PH/1 (with phase-type service) are used for the training data. For (2) we find that using only the first n moments of the service times as inputs is sufficient to train the neural net. Our empirical results indicate that neural nets can estimate the stationary behavior of the M/G/1 extremely accurately. Simulation Based Approach for Reconfiguration and Ramp up Scenario Analysis in Factory Planning Florian Schmid (OTH Regensburg); Tobias Wild (Simplan AG); and Jan Schneidewind, Tobias Vogl, Lukas Schuhegger, and Stefan Galka (OTH Regensburg) Abstract Abstract Structural changes in production entail a potential economic risk for manufacturing companies. 
It is necessary to identify a suitable strategy for the reconfiguration process and to continue to meet demand during the change in the factory structure and the ramp-up phase. A simulation offers the possibility to analyze different ramp-up scenarios for the factory structure and to select a suitable concept for the reconfiguration process. A discrete event simulation approach is presented that can be used to evaluate variants of structural changes and serves as a basis for deciding on a reconfiguration strategy. This approach is demonstrated using a specific production step of a plant producing hydrogen electrolyzers, and the results and generalized conclusions are discussed. Simulation and AI Applications II Chair: Simon J. E. Taylor (Brunel University London) Simulation Optimization for Supply Chain Decision Making Bodhibrata Nag (IIM Calcutta) and Ranjan Pal (University of Cambridge) Abstract Abstract Supply chain design and optimization have been a subject of interest for academia and industry alike. We focus on stochastic and hybrid models in this paper since they closely approximate reality. This paper explains the structure of supply chains, the decisions required to be taken in a typical supply chain, and the models developed for supply chain design and optimization. The paper further explores optimization via simulation to solve stochastic and hybrid models, its applications in the supply chain domain, and future research directions arising out of the recent emphasis on sustainability, robustness, and resilience of supply chains and the opportunities offered by advances in Industry 4.0, Machine Learning, and Big Data. Transfer Learning For Prediction Of Supply Air Temperature From A Cooling System In An Existing Building Seongkwon Cho, Seonjung Ra, and Cheol-Soo Park (Seoul National University) Abstract Abstract Although it is widely acknowledged that model predictive control (MPC) using data-driven models can be beneficially used for building control, developing high-fidelity data-driven models is sometimes not easy due to a lack of diverse and sufficient data. In this paper, the authors present an application of transfer learning (TL) for predicting supply air temperature from a cooling system in an existing building. For TL, a physics-based building simulation model, EnergyPlus, was used to generate synthetic data. Then, an ANN surrogate of the EnergyPlus model was developed and fine-tuned with measured data. It is shown that the model developed via TL is good enough for the prediction and enhances data efficiency when only a limited measured dataset is available. DES-based reinforcement learning for the optimization of the scheduling problems of manufacturing systems Seung Heon Oh, Young In Cho, So Hyun Nam, Hee Chang Yoon, and Ki Young Cho (Seoul National University, Department of Naval Architecture and Ocean Engineering); Gun Woong Byun (Seoul National University, Research Institute of Marine Systems Engineering); Dong Hoon Kwak (Seoul National University, Department of Naval Architecture and Ocean Engineering); and Jong Hun Woo (Seoul National University; Department of Naval Architecture and Ocean Engineering, Research Institute of Marine Systems Engineering) Abstract Abstract Discrete event simulation is an effective method for environment modeling for solving scheduling problems in manufacturing systems through reinforcement learning.
In this study, four characteristics (reliability, rapidity, interoperability, and cost-effectiveness) that a discrete event simulation environment should have to apply reinforcement learning to scheduling problems are specified. Then, an environment development framework based on SimPy, an open-source DES package that satisfies these characteristics, is proposed. Finally, successful industrial cases using this framework are introduced. Track Coordinator - Simulation as Digital Twin: Na Geng (Shanghai Jiaotong University), Andrea Matta (Politecnico di Milano), Yuan Wang (Singapore University of Social Science) Simulation as Digital Twin Digital Twins Applications Chair: Yuan Wang (Singapore University of Social Science) Digital Twins for the Dynamic Management of Blockchain Systems Georgios Theodoropoulos (SUSTech), Tziritas Nikos (University of Thessaly), and Rami Bahsoon and Georgios Diamantopoulos (University of Birmingham) Abstract Abstract Blockchain systems are challenged by the so-called Trilemma tradeoff: decentralization, scalability, and security. Infrastructure and node configuration, choice of the Consensus Protocol, and complexity of the application transactions are cited among the factors that affect the tradeoff balance. Given that Blockchains are complex, dynamic systems, a dynamic approach to their management and reconfiguration at runtime is deemed necessary to reflect the changes in the state of the infrastructure and application. This paper introduces the utilization of Digital Twins for this purpose. The novel contribution of the paper is the design of a framework and conceptual architecture of a Digital Twin that can assist in maintaining the Trilemma tradeoffs of time-critical systems. The proposed Digital Twin is illustrated via an innovative approach to dynamic selection of Consensus Protocols. Simulation results show that the proposed framework can effectively support the dynamic adaptation and management of the Blockchain. Real-Time Spatio-Temporal Databases: Bridging The Gap Between Experimentable Digital Twins And Databases Moritz Alfrink (RWTH Aachen University, Institute of Man-Machine-Interaction) and Jürgen Roßmann (RWTH Aachen University, Electrical Engineering) Abstract Abstract Digital Twins (DTs) have come to be an indispensable methodology in robotics, autonomous driving, sensor simulation, Hardware-in-the-loop (HIL), Augmented Reality (AR), and a variety of other fields. In this paper, we argue why databases supporting the three-dimensional (3-D) simulation of Experimentable Digital Twins (EDTs) have strict criteria, including real-time capabilities, spatial organizability of data, and temporal awareness in data representation. We thus propose the Real-Time Spatio-Temporal Database (RTSTDB) as an architecture combining these capabilities and thus meeting all EDT requirements. To facilitate real-time operation, our implementation of this concept utilizes the highly parallelized vector capabilities of graphics processing units (GPUs). We show the successful application of our approach by providing two use cases that exhibit the identified requirements. Finally, we define a standardization for RTSTDB interfaces to maximize interoperability.
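Two of the criteria argued for above, spatial organizability and temporal awareness of the stored data, can be illustrated with a very small CPU-side data structure: a uniform grid hash whose cells keep time-ordered samples and answer box-and-time-window queries. This is only a conceptual sketch of those two properties; the RTSTDB itself is a GPU-parallel architecture, and the class and method names below are purely illustrative.

```python
from bisect import insort
from collections import defaultdict

class SpatioTemporalStore:
    """Toy store: uniform grid hash over 3-D positions, time-ordered per cell."""

    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.cells = defaultdict(list)        # (ix, iy, iz) -> [(t, pos, payload)]

    def _cell(self, pos):
        return tuple(int(c // self.cell_size) for c in pos)

    def insert(self, t, pos, payload):
        # Keep each cell sorted by timestamp so time-window scans stay cheap.
        insort(self.cells[self._cell(pos)], (t, pos, payload))

    def query(self, lo, hi, t_min, t_max):
        """Samples inside the axis-aligned box [lo, hi] and the time window."""
        ilo, ihi = self._cell(lo), self._cell(hi)
        out = []
        for ix in range(ilo[0], ihi[0] + 1):
            for iy in range(ilo[1], ihi[1] + 1):
                for iz in range(ilo[2], ihi[2] + 1):
                    for t, pos, payload in self.cells.get((ix, iy, iz), []):
                        if t_min <= t <= t_max and all(l <= p <= h for l, p, h in zip(lo, pos, hi)):
                            out.append((t, pos, payload))
        return out

store = SpatioTemporalStore(cell_size=2.0)
store.insert(0.1, (1.0, 2.0, 0.5), "robot_pose")
store.insert(0.2, (5.0, 2.0, 0.5), "sensor_hit")
print(store.query((0, 0, 0), (3, 3, 1), 0.0, 1.0))
```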
Simulation as Digital Twin Methodologies for Digital Twins Chair: Cathal Heavey (University of Limerick) Validation of Digital Twins: Challenges and Opportunities Edward Hua (The MITRE Corporation), Sanja Lazarova-Molnar (Karlsruhe Institute of Technology), and Deena Francis (Technical University of Denmark) Abstract Abstract Digital twins enjoy increasing interest in a diverse array of industrial sectors, such as manufacturing, healthcare, and urban planning. Their usefulness depends on the robustness of the corresponding digital twin models; however, validation of the model, as a means to ensure the models’ robustness, is a difficult problem. Moreover, traditional validation approaches need to undergo significant transformation to be made applicable to digital twins. To the best of our knowledge, there has not been a systematic treatment of validating digital twin models. This paper identifies several challenges facing model validation within digital twins. Furthermore, we propose an initial framework to define basic rules of digital twin model validation and introduce a systematic approach to validation that seamlessly combines expert knowledge and data gathered from available Internet of Things (IoT) devices. Online Validation of Simulation-based Digital Twins Exploiting Time Series Analysis Best Contributed Applied Paper - Finalist Giovanni Lugaresi, Sofia Gangemi, Giulia Gazzoni, and Andrea Matta (Politecnico di Milano) Abstract Abstract Recently, the development of new technologies within the Industry 4.0 revolution has enabled an increase in the digitization level of manufacturing plants. To benefit from the functionalities of digital twins, it is essential to guarantee a correct alignment between the physical system and the associated digital model along the whole system life cycle, and to assess the validity of the digital model online. Traditional validation techniques cannot be applied for this purpose because of their restrictive assumptions and the need for large amounts of data. This work proposes a novel online validation method to assess the correctness of Discrete Event Simulation models. Validation is done by treating shop-floor data as sequences and measuring the similarity between the data streams from the physical system and its corresponding digital model. The proposed method has been tested via offline experiments and with an application within a digital twin architecture exploiting a lab-scale manufacturing system. Optimizing Digital Twin Synchronization in a Finite Horizon Best Contributed Theoretical Paper - Finalist Baris Tan (Koc University) and Andrea Matta (Politecnico di Milano) Abstract Abstract Given the tendency to increase the complexity of digital twins to capture a manufacturing system in the most detailed way, synchronizing and using a complex digital twin with real-time data may require significant resources. We define the optimal synchronization problem to operate digital twins in the most effective way by balancing the trade-off between improving the accuracy of the simulation prediction and using more resources. We formulate and solve the optimal synchronization problem for a special case. We analyze the characteristics of the state-dependent and state-independent optimal policies that indicate when to synchronize the simulation at each decision epoch. Our numerical experiments show that the number of synchronizations decreases with the synchronization cost and with the system variability.
Furthermore, a lower average number of synchronizations can be achieved by using a state-dependent policy. Simulation as Digital Twin Digital Twins in Logistics and Transportation I Chair: Andrea Matta (Politecnico di Milano) Data-based Digital Twin of an Automated Guided Vehicle System Isabella Lichtenstern and Florian Kerber (Augsburg University of Applied Sciences) Abstract Abstract In recent years, automated guided vehicles (AGVs) have matured as a technology to make intralogistics more flexible. However, the integration of fleets of AGVs into existing production facilities still poses challenges. In this paper, a data-driven digital twin model of a system of AGVs is developed. It is based on an analysis of real-time data to replicate the driving behavior of individual vehicles as well as transport order management, vehicle selection, and travel process control more accurately. For the use case of a hybrid flow shop preassembly, the digital twin is used to optimize the number of AGVs required for the target throughput of the factory as well as to compare different path network topologies. Due to the modular design and flexible interfaces, the digital twin can easily be applied in other production scenarios as well. Warehouse Digital Twin: Simulation Modeling and Analysis Techniques Michael E. Kuhl, Sriparvathi Bhattathiri, Rukshar Bhisti, and Maojia Li (Rochester Institute of Technology) Abstract Abstract As an integral component of Industry 4.0, digital twin simulations have the potential to transform the material handling and supply chain industry. In this paper, we provide a digital twin framework for warehouse systems and the methodologies used to implement them. In addition, we present examples of practical digital twin applications in the warehouse environment. Finally, we discuss some of the challenges that need to be overcome to ease implementation and expand adoption of warehouse digital twins. Digital Twin-driven Design and Optimization Method for Smart Warehouse Zhenyong Wu (School of Management Science and Engineering, Nanjing University of Information Science & Technology); Yuan Wang (School of Business, Singapore University of Social Sciences); Hanzhao Wu (College of Computer and Communication Engineering, Zhengzhou University of Light Industry); Wei Zhang (School of Mathematics, South China University of Technology); and Rong Zhou (Institute of High Performance Computing, A*STAR - Agency for Science, Technology and Research of Singapore) Abstract Abstract New-generation information technologies lead to high flexibility and agility in warehouse systems, and digital twin technologies have caught the attention of industry and academia. This paper proposes a digital twin-driven design and optimization approach for smart warehouse systems. A method for digital twin-driven warehouse design is developed to aggregate data from the physical warehouse system and map it to the virtual model. An optimization model that optimizes goods packing and storage assignment in a timely manner is proposed and integrated into the digital twin system. A case study on a real warehouse design is provided to validate and illustrate the proposed approach.
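The storage-assignment part of the optimization described above can be made concrete with a deliberately simple heuristic: re-slot the fastest-moving SKUs into the locations with the shortest travel time to the dock whenever demand data changes. This is a generic illustration of the kind of decision such a digital twin could re-run, not the paper's optimization model; all data and names are made up.

```python
# Greedy turnover-based slotting: fastest movers get the closest slots.
skus = {"A": 120, "B": 45, "C": 300, "D": 10}              # picks per week (hypothetical)
slot_travel_time = {"S1": 5, "S2": 9, "S3": 14, "S4": 20}  # seconds to dock (hypothetical)

by_turnover = sorted(skus, key=skus.get, reverse=True)          # C, A, B, D
by_distance = sorted(slot_travel_time, key=slot_travel_time.get)  # S1, S2, S3, S4

assignment = dict(zip(by_turnover, by_distance))
weekly_travel = sum(skus[s] * slot_travel_time[assignment[s]] for s in skus)

print(assignment)                       # {'C': 'S1', 'A': 'S2', 'B': 'S3', 'D': 'S4'}
print("expected weekly travel time (s):", weekly_travel)
```

In a digital twin setting, the pick frequencies would be refreshed from live warehouse data and the resulting assignment evaluated in the virtual model before being applied to the physical system.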
Simulation as Digital Twin Digital Twins in Logistics and Transportation II Chair: Jie Xu (George Mason University) Studying Logistic Fleet Electrification Using Traffic Microsimulation Software Frederick Pringle, Nagacharan Teja Tangirala, and Muhammad Sajid Ali (Technische Universität München); David Eckhoff (TUMCREATE Limited); and Alois Christian Knoll (Technische Universität München) Abstract Abstract Electric vehicles (EVs) can make a significant contribution to addressing global climate change. However, the adoption of EVs for road freight transport is low, owing to challenges such as limited range, cargo capacity constraints, and charging times. This paper briefly describes a simulation-based approach to studying the feasibility of EV adoption for road freight transport. With the help of the microscopic traffic simulator CityMoS, we assess the impact of electrification from an operational as well as a climate perspective. Predictive Traffic Blocking to Avoid Congestion in Large-scale Automated Material Handling Systems Hae Joong Kim (Korea National University of Transportation), Ri Choe (Samsung Electronics), Daeeun Lim (Kangwon National University), and Sangmin Lee (Kwangwoon University) Abstract Abstract This study focuses on resolving traffic congestion among autonomous vehicles in large-scale semiconductor manufacturing plants. Although automated material handling systems (AMHSs) provide rapid wafer transfer services by controlling thousands of autonomous vehicles between facilities in real time, traffic congestion is a significant issue: it increases the probability of slow wafer transfer, resulting in a failed production schedule, and causes idleness of bottleneck production machines. Therefore, to avoid sudden, inevitable heavy congestion and prevent systematic bottlenecks, this study proposes a method based on a machine learning technique that prevents the expansion of local traffic congestion by predicting and blocking the extension boundary of the congestion area. High-fidelity simulations showed that the proposed method effectively reduced the transfer inefficiency of AMHSs due to traffic congestion, compared to existing rule-based methods, and can ultimately preserve semiconductor productivity in manufacturing plants and ensure stable wafer delivery. A Digital Twin Based Approach to Smart Lighting Design Elham Mohammadrezaei, Alexander Giovannelli, Logan Lane, and Denis Gracanin (Virginia Tech) Abstract Abstract Lighting has a critical impact on user mood and behavior, especially in architectural settings. Consequently, smart lighting design is a rapidly growing research area. We describe a digital twin-based approach to smart lighting design that uses an immersive virtual reality digital twin equivalent (virtual environment) of the real-world, physical architectural space to explore the visual impact of light configurations. The CLIP neural network is used to obtain a similarity measure between a photo of the physical space and the corresponding rendering in the virtual environment. A case study was used to evaluate the proposed design process. The obtained similarity value of over 87% demonstrates the utility of the proposed approach.
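The abstract above does not specify how its CLIP similarity score is computed, but one common way to obtain such a photo-versus-rendering measure is to embed both images with a public CLIP checkpoint and take the cosine similarity of the embeddings, as in the sketch below (Hugging Face transformers API; the image file names are placeholders, and the paper's exact pipeline may differ).

```python
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Public CLIP checkpoint; file names below are placeholders, not the paper's data.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

photo = Image.open("physical_room_photo.jpg")    # photo of the real space
render = Image.open("digital_twin_render.png")   # rendering of the virtual twin

inputs = processor(images=[photo, render], return_tensors="pt")
with torch.no_grad():
    feats = model.get_image_features(**inputs)
feats = feats / feats.norm(dim=-1, keepdim=True)  # unit-normalize embeddings

similarity = (feats[0] @ feats[1]).item()          # cosine similarity in [-1, 1]
print(f"photo/render similarity: {similarity:.2%}")
```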
Simulation as Digital Twin Digital Twins in Manufacturing Chair: Giovanni Lugaresi (Politecnico di Milano) Data-Driven Simulation For Production Balancing And Optimization: A Case Study In The Fashion Luxury Industry Andrea Nunziatini, Virginia Fani, Bianca Bindi, Romeo Bandinelli, and Mario Tucci (University of Florence) Abstract Abstract The paper presents the definition of a data-driven simulation framework and its development using AnyLogic® software for production balancing and optimization in the leather luxury accessories industry. The model has been developed including an industry-oriented set of objects in order to easily replicate the resource configuration and layout mainly used within the analysed context. A case study is therefore conducted to validate the model and confirm its usability. The results of this work demonstrated that the proposed framework can be easily adapted to a shoe joiner using the pre-configured objects. Scenarios have been carried out based on what-if analyses regarding productivity, resource saturation, and bottlenecks. This tool can therefore be used as a valuable support to production planning, scheduling optimization, workload and production cycle balancing. Towards a Digital Twin of a Robot Workcell to Support Prognostics and Health Management Deogratias Kibira and Brian A. Weiss (National Institute of Standards and Technology) Abstract Abstract Current maintenance research often includes modeling the equipment degradation to determine when any degradation will exceed a specified threshold. Such models provide critical intelligence to determine an impending failure and promote the timely scheduling of maintenance, yet the models require equipment data. While healthy state data can be readily captured from a system, degraded or failure state data is more difficult to acquire because equipment normally operates in a healthy state. The degradation process can be modeled in a digital twin to generate failing health data. This paper presents work that is a step in the process of realizing a digital twin for this purpose. A procedure for modeling a robot workcell in a healthy state is described. We discuss how degradations will be incorporated into the robot to generate degraded data that can be used to predict future states of the robot and support decision-making. Enterprise Digital Twin For Risk Free Business Experimentations Souvik Barat, Vinay Kulkarni, and Kaustav Bhattacharya (Tata Consultancy Services Ltd) Abstract Abstract While Digital Twin technology is a proven cost-effective experimentation aid for analyzing physical systems, its effective exploitation is yet to be seen in the enterprise space. Enterprises that are operating in a dynamic environment still address critical business problems like product redesign, improvement of customer satisfaction, enhancement of operational efficiency, and business transformation through an intuition-based trial-and-error approach. Because enterprises are not grounded in precise scientific laws as physical systems are, their system-of-systems nature, inherent uncertainty, inadequate and fragmented data, and increasingly dynamic operating environment make it difficult for existing enterprise modelling techniques to serve as a digital twin of the enterprise. We developed a pragmatic and robust Enterprise Digital Twin (EDT) as an aid to “in-silico” simulation-driven business experimentation for evidence-backed decision-making with possible tradeoffs. This paper presents our approach, key considerations of EDT and their rationales.
It also demonstrates the efficacy of EDT with an illustrative business case of a telecom organization. Track Coordinator - Simulation Down Under: John Fowler (Arizona State University), David Post (CSIRO) Environmental and Sustainability Applications, Simulation Down Under Simulation Down Under Chair: David Post (CSIRO) Modelling to Support Climate Adaptation in the Murray-Darling Basin, Australia David Post and David Robertson (CSIRO), Rebecca Lester (Deakin University), and Francis Chiew and Jorge Pena-Arancibia (CSIRO) Abstract Abstract Many models exist to assess the hydrological impacts of climate change. Some models even exist to assess the hydrological impacts of climate adaptation options. There is, however, a much smaller number of models designed to assess the impacts of climate adaptation options on socio-economics, the community and the environment more widely. A current program of work, known as MD-WERP (the Murray-Darling Water and Environment Research Program), seeks to improve the understanding and representation of key processes in models used to underpin Basin analysis and planning. We are working with policy makers and water managers in Australian State and Federal governments to assess the impacts of climate change and climate adaptation options on hydrological, ecological and socio-economic outcomes in the Murray-Darling Basin (MDB). This will allow the Murray-Darling Basin Authority to consider a wide range of adaptation options in the review of the Murray-Darling Basin Plan scheduled for 2026. Model-Data Integration: Working Together and Systematically Resolving Discrepancies Matthew Adams (Queensland University of Technology), Felix Egger and Katherine O'Brien (The University of Queensland), Maria Vilas (Queensland Government), Hayley Langsdorf (Thoughts Drawn Out), Paul Maxwell (EcoFutures and Alluvium Consulting), Andrew O'Neill (Healthy Land and Water), Holger Maier (The University of Adelaide), Jonathan Ferrer-Mestres (Commonwealth Scientific and Industrial Research Organisation), Lachlan Stewart (Queensland Government), and Barbara Robson (Australian Institute of Marine Science) Abstract Abstract Models and data have an important but sometimes uneasy relationship. Data provide snapshots of what is happening in the system, whilst models can explore the underlying processes driving the observed behavior. Thus, models and data together provide a more comprehensive description of the system than either one alone. However, engagement between modellers and those who collect data can sometimes be challenging, especially if there are discrepancies between models and the data. This presentation showcases our recent work to bridge this divide by introducing (1) a systematic framework for addressing model-data discrepancies, and (2) an action plan to improve relationships between those who model and those who measure. The systematic framework aims to equally balance the potential for discrepancies to arise from data and/or models. The action plan is presented as a light-hearted animation which highlights that modellers and data collectors both want the same thing: better decisions from better science.
Water Forecasts For Enhanced Environmental Water Delivery Richard Laugesen (Australian Bureau of Meteorology, University of Adelaide) and Alex Cornish and Adam Smith (Australian Bureau of Meteorology) Abstract Abstract The Australian Bureau of Meteorology delivers a suite of operational forecast services for water-dependent decision makers; one customer group uses forecasts to inform environmental water delivery. Significant environmental sites in the Murray-Darling Basin, Australia’s food bowl, have suffered due to the allocation of water to irrigated agriculture. The Enhanced Environmental Water Delivery project is a multi-agency collaboration that aims to coordinate releases of environmental water from storages with natural flow events, thereby achieving ecological outcomes with less environmental water. Forecasts of streamflow and runoff will be critical to maximise outcomes and minimise unintended impacts, such as inundation above agreed limits. Water forecasting in Australia is challenging due to climatic diversity, highly variable rainfall, and ephemeral streams. Overcoming these challenges is important to provide skilful and reliable probabilistic forecasts at a range of temporal and spatial scales. These forecasts will contribute to a more equitable distribution of water resources. Track Coordinator - Simulation Education: Christopher Lynch (Old Dominion University), Krzysztof J. Rechowicz (Old Dominion University) Simulation Education Simulation Education and Gaming Chair: Jayendran Venkateswaran (IIT Bombay); Leonardo Chwif (Escola de Engenharia Mauá, Mauá Institute of Technology) Simulation Teaching during the Pandemic: Report of an Experience in a Higher Education Private Institution Leonardo Chwif (Escola de Engenharia Mauá, Mauá Institute of Technology) and Wilson Pereira (Simulate Simulation Technology) Abstract Abstract The COVID-19 pandemic has impacted virtually every sector of our society, including the educational arena. This article reports the experience of discrete-event simulation teaching as part of the industrial engineering curriculum of a higher education private institution in three time instances: the pre-pandemic period (2019), where teaching was in person; the main pandemic period (2020 and the first half of 2021), where teaching was 100% remote; and the hybrid pandemic period (second half of 2021). We compared the teaching process across these instances on several points, performing both qualitative and quantitative analyses. This article concludes that, despite some pedagogical difficulties, it was possible to maintain high quality in the teaching-learning process, compatible with the pre-pandemic period. The article also makes a forecast of how the teaching process of this type of discipline will be in the near future, after having been influenced by the pandemic period. Agent Based Learning Environment for Survey Research Jayendran Venkateswaran, Sayli Shiradkar, and Deepak Choudhary (Indian Institute of Technology Bombay) Abstract Abstract Survey-based research methodology is commonly used in various disciplines ranging from social sciences to healthcare. However, considering practical constraints, it is difficult to provide real-world experience of survey sampling methodologies to students and novice researchers. In this paper, we propose the development of a virtual learning environment based on agent modeling to help learn different aspects and challenges in survey-based research.
A study scenario of the adoption of an improved cookstove is developed as an agent model, where each household (sample point) is an agent. The agent's behavior is defined using state charts and system dynamics models. The agent-based environment has been used to illustrate various learning points for students and novice researchers. Participatory Simulation to Support Transactional Curriculum Inquiry Robert William Brennan and Peter Goldsmith (University of Calgary) and Nancy Nelson (Conestoga College) Abstract Abstract In this paper, we propose an approach to facilitate the identification of threshold concepts in undergraduate engineering curricula. The approach is based on the framework of transactional curriculum inquiry, where educators work with a group of stakeholders (students, curriculum designers, industry practitioners) to identify threshold concepts. Our proposed approach involves developing a participatory simulation using agent-based modeling that will serve as a digital forum for the exploration of threshold concepts in engineering courses. Track Coordinator - Simulation Optimization: Siyang Gao (City University of Hong Kong), Guangxin Jiang (Harbin Institute of Technology, School of Management), Giulia Pedrielli (Arizona State University) Simulation Optimization Estimation Techniques for Simulation Optimization Chair: Susan R. Hunter (Purdue University) Central Limit Theorems for Constructing Confidence Regions in Strictly Convex Multi-Objective Simulation Optimization Susan Hunter and Raghu Pasupathy (Purdue University) Abstract Abstract We consider the context of multiobjective simulation optimization (MOSO) with strictly convex objectives. We show that under certain types of scalarizations, a (1-alpha)-confidence region on the efficient set can be constructed if the scaled error field (over the scalarization parameter) associated with the estimated efficient set converges weakly to a mean-zero Gaussian process. The main result in this paper proves such a "Central Limit Theorem." A corresponding result on the scaled error field of the image of the efficient set also holds, leading to an analogous confidence region on the Pareto set. The suggested confidence regions are still hypothetical in that they may be infinite-dimensional and therefore not computable, an issue under ongoing investigation. Fixed Budget Ranking and Selection with Streaming Input Data Best Contributed Theoretical Paper - Finalist Yuhao Wang and Enlu Zhou (Georgia Institute of Technology) Abstract Abstract We consider a fixed budget ranking and selection problem with input uncertainty, where unknown input distributions can be estimated using input data arriving in batches of varying sizes over time. Each time a batch arrives, the input distribution is updated and additional simulations can be run with a given simulation budget. Within each time stage, we apply large deviations theory to compute the rate function of the probability of false selection (PFS) under the current input distribution and formulate an optimization problem to maximize the decay rate of the PFS. With the derived optimality condition, we design a dynamic optimal budget allocation procedure with sequentially updated input distributions under streaming input data. We prove the consistency and asymptotic optimality of the procedure, and numerically show the high efficiency of our procedure compared to the equal allocation rule and a simple extension of the Optimal Computing Budget Allocation (OCBA) algorithm.
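As background for the OCBA baseline mentioned in the preceding abstract, the sketch below computes the classic static OCBA sampling fractions from current sample means and standard deviations; it is the textbook allocation rule (assuming a minimization problem with distinct means), not the streaming-input procedure proposed in the paper.

```python
# Classic OCBA allocation fractions (background sketch; assumes minimization
# and distinct sample means). Not the paper's streaming-input procedure.
import numpy as np

def ocba_fractions(means, stds):
    """Return asymptotic OCBA sampling fractions for each design."""
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    b = int(np.argmin(means))                  # index of the current best design
    nonbest = np.arange(len(means)) != b
    delta = means - means[b]                   # optimality gaps
    ratio = np.zeros_like(means)
    ratio[nonbest] = (stds[nonbest] / delta[nonbest]) ** 2
    ratio[b] = stds[b] * np.sqrt(np.sum((ratio[nonbest] / stds[nonbest]) ** 2))
    return ratio / ratio.sum()

print(ocba_fractions(means=[1.0, 1.2, 1.5, 2.0], stds=[1.0, 1.0, 1.0, 1.0]))
```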
Policy Evaluation with Stochastic Gradient Estimation Techniques Yi Zhou, Michael Fu, and Ilya Ryzhov (University of Maryland) Abstract Abstract In this paper, we consider policy evaluation in a finite-horizon setting with continuous state variables. The Bellman equation represents the value function as a conditional expectation, which can be further transformed into a ratio of two stochastic gradients. By using the finite difference method and the generalized likelihood ratio method, we propose new estimators for policy evaluation and show how the value of any given state can be estimated using sample paths starting from various other states. Simulation Optimization Advances in Ranking and Selection Chair: Jeff Hong (Fudan University) Non-myopic Knowledge Gradient Policy for Ranking and Selection Kexin Qin (Fudan University), Weiwei Fan (Tongji University), and L. Jeff Hong (Fudan University) Abstract Abstract We consider the ranking and selection (R&S) problem with a fixed simulation budget, in which the budget is assumed to be allocated sequentially. Deriving the optimal sampling procedure for this problem amounts to solving a stochastic dynamic program that is highly intractable. To overcome this difficulty, the existing R&S procedures are often designed from a myopic viewpoint. However, these myopic procedures are only single-step optimal and may have a poor performance for general sequential R&S problems. Therefore, in this paper, we combine two popular lookahead strategies and design a non-myopic knowledge-gradient (KG) procedure. Meanwhile, to streamline the computation of the procedure, we propose a modified Monte Carlo tree search method specifically designed for the R&S context. We show that the new procedure can exhibit a performance superior to the classic KG. Importance Sampling for Rare-Event Gradient Estimation Yuanlu Bai, Shengyi He, and Henry Lam (Columbia University); Guangxin Jiang (Harbin Institute of Technology); and Michael C. Fu (University of Maryland, College Park) Abstract Abstract Importance sampling (IS) is a powerful tool for rare-event estimation. However, in many settings, we need to estimate not only the performance expectation but also its gradient. In this paper, we build a bridge from IS for rare-event estimation to gradient estimation. We establish that, for a class of problems, an efficient IS sampler for estimating the probability of the underlying rare event is also efficient for estimating gradients of expectations over the same rare-event set. We show that both the infinitesimal perturbation analysis and the likelihood ratio estimators can be studied under the proposed framework. We use two numerical examples to validate our findings. Thompson Sampling Meets Ranking and Selection Yijie Peng and Gongbo Zhang (Peking University) Abstract Abstract Ranking and selection has been actively studied in simulation. We briefly review the ranking and selection problem and some existing sampling procedures. Thompson sampling was originally proposed for the multi-armed bandit problem, whereas its variant top-two Thompson sampling can perform better in the ranking and selection problem. We comprehensively compare top-two Thompson sampling with some popular sampling procedures from both theoretical and numerical perspectives.
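To make the comparison in the preceding abstract concrete, here is a minimal sketch of top-two Thompson sampling for selecting the design with the largest mean, assuming Gaussian noise with known unit variance and a flat prior; the toy simulator, the parameter beta, and the budget are illustrative choices, not the authors' experimental setup.

```python
# Hedged sketch of top-two Thompson sampling (TTTS) for ranking and selection.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.0, 0.3, 0.5, 0.52])        # unknown to the procedure
simulate = lambda i: true_means[i] + rng.normal()   # one replication of design i

k, beta, budget = len(true_means), 0.5, 2000
n = np.full(k, 2)                                                    # warm-up replications
s = np.array([sum(simulate(i) for _ in range(2)) for i in range(k)], float)

for _ in range(budget - int(n.sum())):
    theta = rng.normal(s / n, 1.0 / np.sqrt(n))      # posterior draw per design
    leader = int(np.argmax(theta))
    pick = leader
    if rng.random() > beta:                          # with prob 1 - beta, pick a challenger
        for _ in range(100):                         # re-draw until another design leads
            theta = rng.normal(s / n, 1.0 / np.sqrt(n))
            if int(np.argmax(theta)) != leader:
                pick = int(np.argmax(theta))
                break
        else:                                        # fallback: runner-up of the last draw
            pick = int(np.argsort(theta)[-2])
    s[pick] += simulate(pick)
    n[pick] += 1

print("selected design:", int(np.argmax(s / n)), "allocations:", n)
```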
Simulation Optimization Applications of Simulation Optimization Chair: Michael Geurtsen (Eindhoven University of Technology, Nexperia) A Logistic Regression and Linear Programming Approach for Multi-skill Staffing Optimization in Call Centers Thuy Anh Ta (Phenikaa University), Tien Mai (Singapore Management University), and Fabian Bastin and Pierre L'Ecuyer (University of Montreal) Abstract Abstract We study a staffing optimization problem in multi-skill call centers. The objective is to minimize the total cost of agents under some quality of service (QoS) constraints. The key challenge lies in the fact that the QoS functions have no closed form and need to be approximated by simulation. In this paper, we propose a new way to approximate the QoS functions by logistic functions and design a new algorithm that combines logistic regression, cut generation, and logistic-based local search to efficiently find good staffing solutions. We report computational results using examples with up to 65 call types and 89 agent groups showing that our approach performs well in practice, in terms of solution quality and computing time. An Inexact Variance-Reduced Method For Stochastic Quasi-Variational Inequality Problems With An Application In Healthcare Zeinab Alizadeh, Brianna M. Otero, and Afrooz Jalilzadeh (The University of Arizona) Abstract Abstract This paper is focused on a stochastic quasi-variational inequality (SQVI) problem with a continuous and strongly-monotone mapping over a closed and convex set where the projection onto the constraint set may not be easy to compute. We present an inexact variance-reduced stochastic scheme to solve SQVI problems and analyze its convergence rate and oracle complexity. A linear rate of convergence is obtained by progressively increasing the sample size and approximating the projection operator. Moreover, we show how a competition among blood donation organizations can be modeled as an SQVI and we provide some preliminary simulation results to validate our findings. Dynamic Scheduling of Maintenance By A Reinforcement Learning Approach - A Semiconductor Simulation Study Michael Geurtsen (Eindhoven University of Technology, Nexperia) and Ivo Adan and Zumbul Atan (Eindhoven University of Technology) Abstract Abstract Scheduling in a semiconductor back-end factory is an extremely sophisticated and complex task. In the semiconductor industry, more often than not, the scheduling of maintenance receives less attention than production scheduling. This is a missed opportunity, as maintenance and production activities are deeply intertwined. This study considers the dynamic scheduling of maintenance activities on an assembly line. A policy is constructed to schedule a cleaning activity on the last machine of an assembly line such that the average production rate is maximized. The policy takes into account the given flexibility and the contents of the buffers between the machines in the assembly line. A Markov Decision Process is formulated for the problem and solved using Value Iteration and Reinforcement Learning Algorithms. In addition, for a real-world case study, a simulation analysis is performed to evaluate the potential practical benefits. Simulation Optimization Multiobjective Simulation Optimization Chair: Matthew T.
Ford (Cornell University) Optimal Computing Budget Allocation for Multi-Objective Ranking and Selection under Bernoulli Distribution Tianlang Zhao (Nanjing University of Aeronautics and Astronautics) and Xiao Jin and Loo Hay Lee (National University of Singapore) Abstract Abstract This paper studies a multi-objective ranking and selection (MORS) problem with observations following a Bernoulli distribution. The aim is to select the Pareto-optimal set, with each design and performance-measure pair evaluated separately. Our contribution is twofold. (1) We provide a frequentist framework under the Bernoulli assumption in MORS, where a robust asymptotically optimal sampling strategy is derived based on the large deviation principle (LDP). (2) From the optimal sampling strategy, we propose a sequential selection procedure, named MOCBA-B. Numerical results based on the average probability of correct selection (PCS) show that MOCBA-B is significantly superior to equal allocation (EA) and is comparable to the theoretically optimal allocation strategy. Automatic Differentiation for Gradient Estimators in Simulation Matthew T. Ford (Cornell University), David J. Eckman (Texas A&M University), and Shane G. Henderson (Cornell University) Abstract Abstract Automatic differentiation (AD) can provide infinitesimal perturbation analysis (IPA) derivative estimates directly from simulation code. These gradient estimators are simple to obtain analytically, at least in principle, but may be tedious to derive and implement in code. AD software tools aim to ease this workload by requiring little more than writing the simulation code. We review considerations when choosing an AD tool for simulation, demonstrate how to apply some specific AD tools to simulation, and provide insightful experiments highlighting the effects of different choices to be made when applying AD in simulation. Achieving Diversity in Objective Space for Sample-efficient Search of Multiobjective Optimization Problems Eric Hans Lee, Bolong Cheng, and Michael McCourt (Intel Corporation) Abstract Abstract Efficiently solving multi-objective optimization problems for simulation optimization of important scientific and engineering applications such as materials design is becoming an increasingly important research topic. This is due largely to the high evaluation costs associated with such applications, and the resulting need for sample-efficient, multiobjective optimization methods that efficiently explore the Pareto frontier to expose a promising set of design solutions. We propose moving away from using explicit optimization to identify the Pareto frontier and instead suggest searching for a diverse set of outcomes that satisfy user-specified performance criteria. This presents decision makers with a robust pool of promising design decisions and helps them better understand the space of good solutions. To achieve this outcome, we introduce the Likelihood of Metric Satisfaction (LMS) acquisition function, analyze its behavior and properties, and demonstrate its viability on various problems. Simulation Optimization Search-based Simulation Optimization Algorithms I Chair: Michael Choi (Yale-NUS College and Department of Statistics and Data Science, National University of Singapore) Object-oriented Implementation and Parallelization of the Rapid Gaussian Markov Improvement Algorithm Mark Semelhago (Amazon.com), Barry L.
Nelson (Northwestern University), Eunhye Song (Pennsylvania State University), and Andreas Waechter (Northwestern University) Abstract Abstract The Rapid Gaussian Markov Improvement Algorithm (rGMIA) solves discrete optimization via simulation problems by using a Gaussian Markov random field and complete expected improvement as the sampling and stopping criterion. rGMIA has been created as a sequential sampling procedure run on a single processor. In this paper, we extend rGMIA to a parallel computing environment when $q+1$ solutions can be simulated in parallel. To this end, we introduce the $q$-point complete expected improvement criterion to determine a batch of $q+1$ solutions to simulate. This new criterion is implemented in a new object-oriented rGMIA package. Multi-fidelity Discrete Optimization via Simulation Dongyang Li, Haitao Liu, Xiao Jin, Haobin Li, Ek Peng Chew, and Kok Choon Tan (National University of Singapore) and Yun Hui Lin (University College Dublin) Abstract Abstract Digital simulation is a prevalent tool to evaluate the performance of complex industrial systems. In this paper, we consider discrete optimization via simulation (DOvS) problems, where the design space is an integer lattice. In addition, we are interested in leveraging cheap low-fidelity models to enhance optimization efficiency. An innovative Gaussian Markov random field (GMRF), which adaptively evolves along with the optimization process, is introduced to exploit the spatial and inter-model relationships among the objective function values of designs. We then propose the Multi-Fidelity Gaussian Markov Improvement Algorithm (MFGMIA) under the Bayesian global optimization framework. The numerical experiments show that it can achieve significant performance improvement by properly using low-fidelity information. An Empirical Review of Model-based Adaptive Sampling for Global Optimization of Expensive Black-box Functions Nazanin Nezami and Hadis Anahideh (University of Illinois Chicago) Abstract Abstract This paper reviews the state-of-the-art model-based adaptive sampling approaches for single-objective black-box optimization (BBO). While the BBO literature includes various promising sampling techniques, there is still a lack of comprehensive investigations of the existing research across the vast scope of BBO problems. We first classify BBO problems into two categories, engineering design and algorithm design optimization, and discuss their challenges. We then critically discuss and analyze the adaptive model-based sampling techniques, focusing on key acquisition functions. We elaborate on the shortcomings of the variance-based sampling techniques for engineering design problems. Moreover, we provide in-depth insights into the impact of the discretization schemes on the performance of acquisition functions. We emphasize the importance of dynamic discretization for distance-based exploration and introduce EEPA+, an improved variant of a previously proposed Pareto-based sampling technique. Our empirical analyses reveal the effectiveness of variance-based techniques for algorithm design and distance-based methods for engineering design optimization problems. Simulation Optimization Search-based Simulation Optimization Algorithms II Chair: Mark Semelhago (Amazon.com) Bandit-Based Multi-Start Strategies for Global Continuous Optimization Phillip Huang Guo and Michael C.
Fu (University of Maryland) Abstract Abstract Global continuous optimization problems are often characterized by the existence of multiple local optima. For minimization problems, to avoid settling in suboptimal local minima, optimization algorithms can start multiple instances of gradient descent from different initial positions, known as a multi-start strategy. One key aspect in a multi-start strategy is the allocation of gradient descent steps as resources to promising instances. We propose new strategies for allocating computational resources, developed for parallel computing but applicable in single-processor optimization. Specifically, we formulate multi-start as a Multi-Armed Bandit (MAB) problem, viewing different instances to be searched as different arms to be pulled. We present reward models that make multi-start compatible with existing MAB and Ranking and Selection (R&S) procedures for allocating gradient descent steps. We conduct simulation experiments on synthetic functions in multiple dimensions and find that our allocation strategies outperform other strategies in the literature for deterministic and stochastic functions. Landscape Modification Meets Surrogate Optimization: Towards Developing an Improved Stochastic Response Surface Method Michael Choi (Yale-NUS College and Department of Statistics and Data Science, National University of Singapore) and Venkatkrishna Karumanchi (Yale-NUS College) Abstract Abstract In global optimization, surrogate optimization algorithms such as the Stochastic Response Surface (SRS) method are often employed when the objective function is expensive to evaluate or when the gradient information is unavailable. The aim of this paper is to propose and analyze an improved SRS method that instead targets a transformed objective function. The core idea of the transformation rests on introducing a threshold parameter: the landscape is modified when the algorithm is above this threshold, making it easier for the algorithm to climb out of a local minimum basin while preserving the set of stationary points. We prove the long-time convergence of the proposed improved SRS method, and provide positive numerical results on some common global optimization benchmark functions which demonstrate the improved convergence of the proposed method. We stress that the proposed method can be implemented with minimal additional computational costs. Simulation-Based Sets of Similar-Performing Actions in Finite Markov Decision Process Models Wesley Marrero (Dartmouth College) Abstract Abstract Markov decision process (MDP) models have been used to evaluate the performance of policies in various domains, such as treatment planning in medical decision making. However, in practice, decision makers may prefer other strategies that are not statistically different from the actions in their initial policy of interest. To allow for decision makers' expertise and provide flexibility in implementing policies, this paper introduces a new framework for identifying sets of similar-performing actions in finite MDP models. These sets are obtained by combining a simulation-based dynamic programming algorithm for policy evaluation with a simulation-based statistical multiple comparisons procedure. The framework in this paper is applied in a medical decision-making setting to find sets of similar-performing antihypertensive treatment choices for a set of clinically representative patient profiles.
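As a generic illustration of the bandit-based multi-start idea described earlier in this session (not the authors' specific reward model), the sketch below treats each gradient-descent restart as an arm and allocates steps with a UCB1 index computed from the observed per-step improvement on a toy one-dimensional multimodal function.

```python
# Hedged sketch: UCB1 allocation of gradient-descent steps across restarts.
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x) + 0.3 * x**2          # toy multimodal objective
grad = lambda x: 3 * np.cos(3 * x) + 0.6 * x      # its derivative

n_starts, lr, total_steps = 5, 0.05, 400
x = rng.uniform(-4.0, 4.0, n_starts)              # one "arm" per restart
pulls = np.zeros(n_starts)
avg_reward = np.zeros(n_starts)

def pull(arm):
    """One gradient step on restart `arm`; reward is the improvement in f."""
    before = f(x[arm])
    x[arm] -= lr * grad(x[arm])
    reward = before - f(x[arm])
    pulls[arm] += 1
    avg_reward[arm] += (reward - avg_reward[arm]) / pulls[arm]

for arm in range(n_starts):                             # initialize every arm once
    pull(arm)
for t in range(n_starts + 1, total_steps + 1):
    ucb = avg_reward + np.sqrt(2 * np.log(t) / pulls)   # UCB1 index per restart
    pull(int(np.argmax(ucb)))

best = int(np.argmin(f(x)))
print(f"best restart {best}: x = {x[best]:.3f}, f(x) = {f(x[best]):.4f}")
```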
Financial Engineering, Model Uncertainty and Robust Simulations, Simulation Optimization Sampling and Regression Techniques Chair: Jiaming Liang (Yale University; The Chinese University of Hong Kong, Shenzhen) A Proximal Algorithm for Sampling from Non-smooth Potentials Jiaming Liang (Yale University; The Chinese University of Hong Kong, Shenzhen) and Yongxin Chen (Georgia Institute of Technology) Abstract Abstract In this work, we examine sampling problems with non-smooth potentials and propose a novel Markov chain Monte Carlo algorithm for this setting. We provide a non-asymptotic analysis of our algorithm and establish a polynomial-time complexity $\tilde {\cal O}(M^2 d {\cal M}_4^{1/2} \varepsilon^{-1})$ to achieve $\varepsilon$ error in terms of total variation distance to a log-concave target density with 4th moment ${\cal M}_4$ and $M$-Lipschitz potential, better than most existing results under the same assumptions. Our method is based on the proximal bundle method and an alternating sampling framework. The latter framework requires the so-called restricted Gaussian oracle, which can be viewed as a sampling counterpart of the proximal mapping in convex optimization. One key contribution of this work is a fast algorithm that realizes the restricted Gaussian oracle for any convex non-smooth potential with bounded Lipschitz constant. A Dynamic Credibility Model with Self-excitation and Exponential Decay Himchan Jeong (Simon Fraser University) and Bin Zou (University of Connecticut) Abstract Abstract This paper proposes a dynamic credibility model for claim counts that extends the benchmark Poisson generalized linear models (GLM) by incorporating self-excitation and exponential decay features from Hawkes processes. Under the proposed model, a recent claim has a bigger impact on the credibility premium than an outdated claim. Empirical results show that the proposed model outperforms the Poisson GLM in both in-sample goodness-of-fit and out-of-sample prediction. A Computational Study of Probabilistic Branch and Bound with Multilevel Importance Sampling Hao Huang (Yuan Ze University); Pariyakorn Maneekul, Danielle F. Morey, and Zelda B. Zabinsky (University of Washington); and Giulia Pedrielli (Arizona State University) Abstract Abstract Probabilistic branch and bound (PBnB) is a partition-based algorithm developed for level set approximation, where investigating all subregions simultaneously is a computationally costly sampling scheme. In this study, we hypothesize that focusing branching and sampling on promising subregions will improve the efficiency of the PBnB algorithm. Two variations of Original PBnB are proposed: Multilevel PBnB and Multilevel PBnB with Importance Sampling. Multilevel PBnB focuses its branching on promising subregions that are likely to be maintained or pruned, as opposed to Original PBnB, which branches more subregions. Multilevel PBnB with Importance Sampling attempts to further improve this efficiency by combining focused branching with a posterior distribution that updates iteratively. We present numerical experiments using benchmark functions to compare the performance of each PBnB variation. Track Coordinator - Vendor Tracks: Ernie Lee, Chao Meng (University of Southern Mississippi), David T. Sturrock (Simio LLC), Edward Williams (PMC) Vendor Build, Deploy, Use, and Maintain Simulations with AnyLogic 9 Andrei Borshchev and Sergey Suslov (The AnyLogic Company) Abstract Abstract AnyLogic 9, our new innovative product, radically extends the lifecycle of simulation models.
A typical model’s lifetime is limited to the duration of a project, but with AnyLogic 9 technology, simulation becomes a truly operational component integrated into the company workflow: always ready to run, always editable, exposing a standard open API, and not requiring any installation on the developer’s or end user’s machines. Digital twinning, operational decision support tools, AI training and testing – all kinds of enterprise simulation applications are naturally supported by AnyLogic 9. During our presentation, we will give a live demonstration of the creation and operational usage of AnyLogic 9 simulation models, as well as how to ensure they are constantly kept up to date. Vendor Cloud-based Supply Chain Planning, Connect to Innovation at Scale: Mozart Cloud Yong H. Chung, Gu-Hwan Chung, Byung-Hee Kim, and Keyhoon Ko (VMS Solutions Co., Ltd) Abstract Abstract MOZART has improved supply chain planning excellence for global semiconductor and display manufacturing leaders over the past two decades. The integrated solution for development and operations allows effective production planning and scheduling systems to be implemented, leveraging pre-built libraries and capturing all rules and constraints. The customized application runs with real-time data, producing accurate end results. Its simulation tool utilizes trained machine learning models for smart manufacturing operations. Based on our extensive experience, VMS introduces MOZART Cloud, featuring quick and easy software implementation, high-quality planning output, and stable system operation. Users can analyze the planning results with powerful embedded user interfaces which use the predetermined input and output schema. Since pre-built functions and rules cover most of the requirements frequently requested by manufacturing companies, users can easily select rules and options for their own purposes. It has successfully been implemented in the automotive, electronics, machinery, and bio-chemical industries. Vendor Using Reward Systems in Flexsim to Train AI Yin Kai Chan (Advent2Labs Singapore) and Bill Nordgren (FlexSim Software Products, Inc.) Abstract Abstract FlexSim is dedicated to developing the most capable and usable 3D simulation modeling and analysis software. With the rapid advances in artificial intelligence and machine learning (AI/ML), simulation has become the keystone for a variety of advanced modeling and optimization applications. This year, we’d like to demonstrate our latest capabilities in AI/ML, including a Reinforcement Learning tool, Python integration, and a partnership with Microsoft’s Project Bonsai. Using a model and case study that was developed for graduate students at BYU, we will dig into these new features and show how FlexSim can be used for advanced simulation applications in the areas of AI/ML and Digital Twin. Vendor Simulation Solutions and Applications for Complex One-of-a-Kind Production Processes Based on STS Library Dirk Steinhauer (SimPlan AG) Abstract Abstract There are various specific challenges in managing one-of-a-kind processes in industry. Based on the STS Library for complex production and logistics, SimPlan AG has implemented several simulation solutions for one-of-a-kind production, supporting not only strategic production development but also operational planning and control. Track Coordinator - Ph.D.
Colloquium: Anatoli Djanatliev (University of Erlangen-Nuremberg), Siyang Gao (City University of Hong Kong), Chang-Han Rhee (Northwestern University), Cristina Ruiz-Martín (Carleton University) PhD Colloquium PhD Colloquium Supporting Automated Warehouses with Data-Driven Modelling Andrea Ferrari (Politecnico di Torino) Abstract Abstract With the recent increased pressure on supply chains, Automated Storage and Retrieval Systems (AS/RS) play a key role in improving the efficiency of logistics processes. Data-driven simulation modelling coupled with Machine Learning (ML) algorithms might represent an effective approach to anticipating problems that may occur in warehousing processes. Thus, this PhD project aims to develop an advanced data-driven simulation model for supporting the decision-making process related to AS/RS operations. To this end, the objectives are to evaluate the state of the art of AS/RS simulation, develop and validate a data-driven simulation model, and integrate it with an ML algorithm to reduce the cycle time. Finally, potential applications of this approach will be tested in different operational settings. Emergency Vehicle Preemption Strategies: a Microsimulation Based Case Study on a Smart Signalized Corridor Somdut Roy (Georgia Institute of Technology) Abstract Abstract Emergency response vehicles (ERVs), like firetrucks, operate with the purpose of saving lives and mitigating property damage. Emergency vehicle preemption (EVP) is implemented to provide the right-of-way to ERVs by displaying green indications along the ERV route. Testing on a signalized corridor in Georgia, this study proposes a strategy, “Dynamic-Preemption”, which utilizes Connected-Vehicle technology to detect the need for preemption in real time, prior to the ERV reaching the vicinity of the intersection. To choose the best EVP strategy, several KPIs are measured for: (a) the ERV route, which is chosen to be along the mainline for this case study, and (b) the side streets, which are adversely affected by EVP. The study tests different strategies under varying scenarios and provides a methodology for selecting the most favorable one. It was observed that there is potential for ERV travel time improvements on the order of 25% with minimal impact on the conflicting traffic. Impact of Earlier Boosters and Pediatric Vaccines on Covid-19 Erik Rosenstrom (North Carolina State University) Abstract Abstract The objective is to evaluate the impact of the earlier availability of COVID-19 vaccinations to children and boosters to adults in the face of the Delta and Omicron variants. We employed an agent-based stochastic network simulation model with a modified SEIR compartment model populated with demographic and census data for North Carolina. We found that earlier availability of childhood vaccines and earlier availability of adult boosters could have reduced the peak hospitalizations of the Delta wave by 10% and the Omicron wave by 42%, and could have reduced cumulative deaths by 9% by July 2022. When studied separately, we found that earlier childhood vaccinations reduce cumulative deaths by 2,611 more than earlier adult boosters. Therefore, the results of our simulation model suggest that the timing of childhood vaccination and booster efforts could have resulted in a reduced disease burden and that prioritizing childhood vaccinations would most effectively reduce disease spread.
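For background on the compartment structure underlying the agent-based COVID-19 model above, the following is a minimal aggregate discrete-time SEIR sketch; the population size and rate parameters are illustrative placeholders and bear no relation to the study's calibrated North Carolina model.

```python
# Illustrative aggregate SEIR dynamics (daily Euler steps); not the study's model.
import numpy as np

N = 10_000_000                            # population size (illustrative)
beta, sigma, gamma = 0.35, 1 / 4, 1 / 7   # transmission, incubation, recovery rates
S, E, I, R = N - 100, 0.0, 100.0, 0.0     # start with 100 infectious individuals
infectious_history = []

for day in range(365):
    new_exposed = beta * S * I / N        # S -> E
    new_infectious = sigma * E            # E -> I
    new_recovered = gamma * I             # I -> R
    S -= new_exposed
    E += new_exposed - new_infectious
    I += new_infectious - new_recovered
    R += new_recovered
    infectious_history.append(I)

peak_day = int(np.argmax(infectious_history))
print(f"peak infectious: {max(infectious_history):,.0f} on day {peak_day}")
```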
Use of Social Determinants of Health in Agent-based Models for Early Detection of Cervical Cancer Juan Fernando Galindo Jaramillo (UNICAMP, University of Campinas) Abstract Abstract Cervical cancer is a treatable disease when detected at an early stage. Yet, in Brazil, most cases are detected at an advanced stage due to social conditions that impede periodic screening. The objective of this project is to test strategies for maximizing early detection of cervical cancer. An Agent-Based Model using social determinants of health is being created to simulate the conditions in which women live. Then, with Reinforcement Learning techniques, several strategies are tested. Early results show how improvements in schooling and income levels improve early-stage detection. These results may indicate the importance of improving social conditions to strengthen overall prevention. The Interplay of Optimization and Statistics to Solve Large-scale Black-box Noisy Functions Pariyakorn Maneekul (University of Washington) Abstract Abstract The main research focus is to develop optimization methodologies that integrate with statistics for solving black-box noisy functions with discrete-time performance analysis. The proposed research will improve the efficiency of a partition-based optimization method, Probabilistic Branch and Bound (PBnB), for level set approximation. Probabilistic Branch and Bound approximates a level set by branching subregions and statistically classifying them as maintained (inside the level set) or pruned (no intersection with the level set). We propose multi-level PBnB, which uses importance sampling to identify promising subregions for classification, compared with uniform sampling in the original PBnB. We also incorporate Gaussian processes as a surrogate model to guide the importance sampling and aid in classifying subregions. We present the proposed algorithms with a finite-time performance analysis in terms of incorrect pruning and maintaining of subregions of the solution space. Numerical experiments are presented and compared to the analytical results for demonstration purposes. A Simulation-Optimization Framework To Improve The Organ Transplantation Offering System Ignacio Ismael Erazo Neira (Georgia Institute of Technology) Abstract Abstract We propose a simulation-optimization-based methodology to improve the way that organ transplant offers are made to potential recipients. Our policy can be applied to all types of organs, is implemented starting at the local level, is flexible with respect to simultaneous offers of an organ to multiple patients, and takes into account the quality of the organs under consideration. We describe in detail our simulation-optimization procedure and how it uses data from the Organ Procurement and Transplantation Network and the Scientific Registry of Transplant Recipients to inform the decision-making process. In particular, the optimal batch size of offers is determined as a function of location and certain organ attributes. We present results using our liver and kidney models, where we show that, under our policy recommendations, more organs are utilized and the required times to allocate the organs are reduced over the one-at-a-time offer policy currently in place.
Fast Approximation to Discrete-event Simulation of Queueing Networks Tan Wang (Fudan University) Abstract Abstract The simulation of queueing systems is generally carried out by discrete-event simulation, which updates the simulation time with the occurrence time of the next event. However, for large-scale queueing networks, especially when the network is very busy, keeping track of all the events takes a long time, and it is difficult to harness the capability of parallel computing. In this paper, we propose an innovative fast simulation approximation framework for large-scale queueing networks, where time is broken up into small time intervals and the system state is updated according to the events happening in each time interval. The computational complexity analysis demonstrates that our method is much more efficient for large-scale networks, compared with discrete-event simulation. In addition, we theoretically deduce the approximation error bound of the proposed algorithms. The experimental results show that our framework can be thousands of times faster than state-of-the-art discrete-event simulation tools. Simulation of Cooperative Downloading in Python Michael Niebisch (FAU Erlangen-Nuremberg) Abstract Abstract With the deployment of more software in vehicles, the need for efficient and cost-effective updates arises. One method to face this need is cooperative downloading, which enables vehicles to exchange parts of an update with each other, to collect a full set. Simulation of applicable strategies is therefore crucial, as these can influence the efficiency of such systems; however, state-of-the-art network simulators are not built for large-scale test scenarios with long simulated time spans. We introduce Cooperative Downloading in Python (CoDiPy), a framework for efficient analysis of cooperative downloading, while improving execution time. With CoDiPy, we have investigated relevant topics like encoding, communication strategies and cost optimization. Does Financial Quantitative Easing Help Alleviate the Economic Disparity?: Results of Agent-based Simulation Models Tae-Sub Yun (KAIST) Abstract Abstract I verified the effect of financial quantitative easing on economic inequality using two agent-based simulation models. First, I examined the effect of housing finance on wealth inequality with the Korean housing market model. Second, I analyzed the effect of corporate finance on income inequality in the Korean macroeconomic model. I confirmed that financial quantitative easing exacerbates inequality in both models. The results also imply that adequate financial regulation can mitigate the adverse effects of finance. Simulations for Optimizing Dispatching Strategies in Semiconductor Fabs Using Machine Learning Techniques Benjamin Kovács (Alpen-Adria-Universität Klagenfurt) Abstract Abstract Optimizing operations of semiconductor manufacturing plants is a tremendous challenge due to the complexity and scale of real-world problem instances. Simulations are widely used for prototyping, evaluating, comparing, and verifying new control strategies, reducing the costs, risks, and time required for development. We present an open-source, customizable discrete-event simulator tool for the semiconductor industry, simulating real-world-scale problems based on open datasets and incorporating the challenging characteristics and constraints of the process. The simulator provides a general, customizable interface to allow benchmarking of various methods.
A reinforcement learning environment is also bundled with the toolbox. Using our software, we develop evolutionary algorithms and reinforcement learning-based dispatching strategies and compare them to heuristics widely adopted in the industry. Toward developing an agent-based model of European pharmaceutical trade market Ruhollah Jamali (University of Southern Denmark) Abstract Abstract Agent-based modeling and simulation offer a bottom-up platform to research the European pharmaceutical trading market and enable us to investigate macroeconomic dynamics emerging from microeconomic behaviors and interactions of players in the market. Players such as manufacturers, wholesalers, parallel traders, and hospitals are involved in the pharmaceutical trading market, and their activities in the market are called microeconomic behaviors and interactions. The main objective of this work is to develop an agent-based model of the pharmaceutical trading market based on an available game-theoretic model of the market and to discuss applications of the presented model for economists and players in this market to study and predict the market. Rare-Event Simulation without Variance Reduction: An Extreme Value Theory Approach Yuanlu Bai (Columbia University) Abstract Abstract In estimating probabilities of rare events, crude Monte Carlo (MC) simulation is inefficient, which motivates the use of variance reduction techniques. However, these latter schemes rely heavily on delicate analyses of underlying simulation models, which are not always easy or even possible. We propose the use of extreme value analysis, in particular the peak-over-threshold (POT) method, which is popularly employed for extremal estimation of real datasets, in the simulation setting. More specifically, we view crude MC samples as data to fit a generalized Pareto distribution. We test this idea on several numerical examples. The results show that our POT estimator appears more accurate than crude MC and, while crude MC can easily give a trivial probability estimate of 0, POT outputs a non-trivial estimate with a roughly correct magnitude. Therefore, in the absence of efficient variance reduction schemes, POT appears to offer potential benefits to enhance crude MC estimates. Modeling Dental Caries Prevention Care in Polish Primary Schools: a Hybrid Simulation Approach Maria Hajłasz (Wrocław University of Science and Technology) Abstract Abstract The aim of the study is to develop a universal hybrid approach based on discrete event simulation (DES) and agent-based simulation (ABS) to support decision makers in the planning of dental caries prevention programs in Polish primary schools. DES makes it possible to observe students during their education in primary school and changes that occur in their oral health. ABS is responsible for tracking changes in students' attitudes toward oral health care under the influence of close surroundings. The proposed hybrid model can support decisions regarding the planning of the amount and intensity of dental caries prevention services, as well as the identification of the human resources needed to provide these services. The research findings are promising, and the presented approach makes it possible to check different preventive care scenarios, which can ultimately lead to the indication of the number of specialists necessary to provide services to a specific group of students.
Agent-based Modelling of Farmers’ Climate-resilient Crop Choice in the Upper Mekong Delta of Vietnam Thi Ha Lien Le (University of New England) Abstract Abstract Flood-adaptive crops are desired alternatives in the flood zone areas of the Mekong River Delta of Vietnam in order to lessen the negative impacts on the environment caused by widespread use of high dyke systems and triple rice monocultures. This study uses an agent-based modelling system to gain insight into the factors driving farmers’ decisions to switch between triple rice monocultures and other flood-adaptive and resilient crops. It was found that the influential determinants are dyke construction, labor availability, perception of environmental sustainability, knowledge about new low-dyke alternatives, availability of collateral to access credit, and risk preference. It is suggested that in order to facilitate sustainable transformation of this area in the future, the government should move away from high dyke construction and focus on raising awareness and perception, promoting agricultural mechanization, improving credit access, and developing agricultural insurance and risk management schemes. A system simulation-based approach for sustainability evaluation and benchmarking of buildings Ann Francis (Indian Institute of Technology Bombay) Abstract Abstract The building sector significantly impacts the economic, social, and environmental dimensions of sustainability. However, the building sector lacks a suitable sustainability assessment and benchmarking mechanism that evaluates the tradeoffs among the three pillars while incorporating the time-induced changes in building characteristics. Hence, systems thinking that accounts for complex systemic behavior is ideal for solving sustainability problems. Therefore, the research proposes a methodological framework for benchmarking the sustainability of buildings using system dynamics modeling and simulation. System simulation enables forecasting the sustainability performance of a building while evaluating numerous improvement scenarios as well. Furthermore, a vast dataset is generated through a series of simulations of numerous building types. This dataset is then used to develop a benchmarking scale against which the performance of different buildings is defined. Such a simulation-based framework would enable overcoming the challenges associated with the data-intensive, multi-faceted and complex nature of building sustainability evaluation. Supervised Machine Learning for Effective Missile Launch Based on Beyond Visual Range Air Combat Simulations Joao Dantas (Institute for Advanced Studies) Abstract Abstract This work compares supervised machine learning methods using reliable data from beyond visual range air combat constructive simulations to estimate the most effective moment for launching missiles. We employed resampling techniques to improve the predictive model, and we could identify the remarkable performance of the models based on decision trees and the significant sensitivity of other algorithms. The best models achieved f1-scores of 0.379 and 0.465 without and with the resampling technique, respectively, an increase of 22.69%, with appropriate inference time. Thus, if desirable, resampling techniques can improve the model's recall and f1-score with a slight decline in accuracy and precision.
Therefore, through data obtained from constructive simulations, it is possible to develop decision support tools based on machine learning models, which may improve the flight quality in air combat, increasing the effectiveness of offensive missions to hit a particular target. Design space specification, exploration, and simulation for production systems Nick Paape (Eindhoven University of Technology) Abstract Abstract Designing a production system using simulation can be a challenging cycle of specifying, modelling, simulating and evaluating the performance of each (re)design. This extended abstract presents a framework for automated design space exploration of production systems. The framework includes a formal design space specification language for production systems which can be used to generate potential designs, and a method for automated exploration and analysis of the specified design space using simulation. An Approach to Population Synthesis of Engineering Students for Understanding Dropout Risk Danika Dorris (North Carolina State University) Abstract Abstract Dropping out of STEM remains a critical issue today, and it would be useful for universities to have reliable predictive models to detect students’ dropout risks. Generating a synthetic population that mirrors the true population could be useful for simulating the system and testing scenarios. We outline an approach for creating a synthetic population of students in STEM using Bayesian Networks and build a microsimulation which simulates students’ risk behaviors over time using Dynamic Logistic Regression prediction models. This process has identified several areas that must be addressed before the synthetic population represents the true population in a simulation. Real-Time Discrete Event Simulation of Production Processes for Data-Based Construction Management Manuel Jungmann (Technische Universität Berlin) Abstract Abstract The construction industry is characterized by a low level of digitalization and productivity. As construction works are dynamic, unique, and executed outdoors, the management of these processes is complex. Due to advancements in technology, real-time data can be collected during construction execution. These real-time data can be analyzed by machine learning to determine reliable activity durations. Based on the durations, stochastic probability modeling is used to find suitable probability density functions as input parameters for data-driven discrete event simulation. A calibrated discrete event simulation tool is developed for real-time, data-based management of ongoing production processes. The tool estimates the effects of management decisions on key performance indicators, and the inclusion of lean construction principles is possible. Additionally, risks, such as weather conditions, can be included in the tool. Hence, it is shown how real-time discrete event simulation enables data-driven decision-making for improved construction management. Poster Madness Poster Madness Chair: Cristina Ruiz-Martín (Carleton University) Quality Driven Transport Strategies for the Wood Supply Chain Christoph Kogler (University of Natural Resources and Life Sciences Vienna) Abstract Abstract Fresh sawlogs lose quality during storage at roadside, primarily through blue stain and insect infestation, leading to wood value loss. The potential of wood quality forecasting to prioritize wood piles at risk of devaluation for transport has not been evaluated thus far.
Consequently, a virtual wood supply chain environment containing dynamic altitude zone-based risk forecasts for blue stain and insect infestations was developed. Based on a discrete event simulation model, unimodal and multimodal wood supply chains were simulated to track sawlog quality development from roadside stocks to the wood-based industry. This enabled identifying and modelling the relationship between lead time and wood quality devaluation as well as applying this knowledge to evaluate innovative transport strategies. Respective regression analyses showed that the procurement lead time is a significant predictor of the downgraded wood amount, explaining over 98% of the variance of downgraded wood in quadratic and cubic relations. Risk-averse Contextual Multi-armed Bandit Problem with Linear Payoffs Yifan Lin, Yuhao Wang, and Enlu Zhou (Georgia Institute of Technology) Abstract Abstract In this paper we consider the contextual multi-armed bandit problem for linear payoffs under a risk-averse mean-variance criterion. We apply the Thompson Sampling algorithm for the disjoint model, and provide a comprehensive regret analysis for a variant of the proposed algorithm. For $T$ rounds, $K$ actions, and $d$-dimensional contexts, we prove a regret bound of $O((1+\rho+\frac{1}{\rho}) d\ln T \ln \frac{K}{\delta}\sqrt{d K T^{1+2\epsilon} \ln \frac{K}{\delta} \frac{1}{\epsilon}})$ that holds with probability $1-\delta$ under the mean-variance criterion with risk tolerance $\rho$, for any $0<\epsilon<\frac{1}{2}$, $0<\delta<1$. The empirical performance of the algorithms is demonstrated via a portfolio selection problem. Dispatching in Real Frontend Fabs With Industrial Grade Discrete-Event Simulations by Use of Deep Reinforcement Learning Patrick Stöckermann (Infineon Technologies AG, University of Klagenfurt) Abstract Abstract Optimization of lot dispatching in semiconductor manufacturing is essential for time and cost efficient production. In contrast to constraint, mixed-integer, or answer set programming approaches to the problem, Reinforcement Learning (RL) is not limited to local optimization and ideally generalizes in similar domains. Although there are numerous publications utilizing RL, most are applied to small idealized test instances which are not realistic. We propose a Deep Reinforcement Learning (DRL) approach that is applied to real world scenarios. We utilize an industry grade discrete-event simulation as a training and testing environment in order to find global optima for the average tardiness and the Flow Factor (FF). Due to the current limitations of the computing hardware, no relevant results could be obtained yet. However, we expect to improve these measures compared to the highly realistic dispatching rules modelled in the simulation environment by controlling generally acknowledged high impact work centers. Charging Schedule Problem of Electric Buses Using Discrete Event Simulation with Different Charging Rules Chun-Chih Chiu (National Chin-Yi University of Technology) and I-Ching Lin and Ching-Fu Chen (National Cheng Kung University) Abstract Abstract Urban air quality is a global problem because of emissions from vehicles. Thus, many bus operators are switching to electric buses in order to improve the air quality on their bus network. Electric buses have more complicated scheduling aspects than traditional internal combustion engine buses, including battery recharging. 
Although most studies use a mathematical model, the model usually considers simplified operating rules in a deterministic environment to achieve a single objective. Long waiting times in the recharging process affect the resting time and satisfaction of drivers. An essential operational concern in the charging schedule problem is how to reduce this waiting time. We propose a hybrid algorithm that combines simplified swarm optimization with a dynamic rule within a discrete event simulation model to improve the solution quality. The proposed method for reducing the waiting time of electric buses is demonstrated through a case study of a bus company. Simulation-based Optimization For Operational Excellence Of A Digital Picking System In A Warehouse Junyong So and Youngjin Kim (Dongguk University-Seoul); Byoungsuk Ji, Seongrok Hong, and Seungwoo Jeon (KT R&D Center); and Sojung Kim (Dongguk University-Seoul) Abstract Abstract This study introduces a simulation-based optimization for operational excellence of a digital picking system (DPS) in a warehouse in terms of productivity. The DPS is a well-known system used for picking multiple items based on given inventory information (i.e., quantity, type, and picking order of items) without using paper. Since its performance is heavily dependent on item locations (or DPS segmentation) and a labor schedule associated with dynamic interactions, this study adopts simulation-based optimization to identify the optimal design of a DPS regarding its labor schedule under realistic conditions of a retailer-driven commodity chain. The proposed approach is implemented in AnyLogic® 8.7.11 simulation software with inventory data from a warehouse in South Korea. In addition, OptQuest® is used as the optimizer to solve the optimization problem via metaheuristics for operational excellence of the subject facility. Leveraging Causal Discovery Methods to Enhance Passenger Experience at Airport - An Analysis Method for Agent-based Simulators Shuang Chang, Takashi Kato, Koji Maruhashi, and Kotaro Ohori (Fujitsu Ltd.) Abstract Abstract Passenger experience of non-aeronautical activities has become an important concern for designing and managing airports. There has been increasing interest in developing agent-based simulators to facilitate designs that enhance passenger experience. Coupled with such simulators, systematic and effective analysis methods which can guide the design based on causal explanations of simulation outputs are critical yet challenging. In this work, we propose such an analysis method leveraging causal discovery technologies and integrate it into an agent-based simulator, which is developed for signage system design by modeling passengers’ realistic routing behaviors in a virtual airport terminal. By systematically discovering the causal relations among the signage system, airport environment, passengers’ characteristics, and their experience under different situations, the proposed analysis method can strengthen the explanation of simulation results and inform indirect control policies leading to positive passenger experience of non-aeronautical activities. Developing Tree Planting Robots with Help of Simulation Jussi Manner and Anders Eriksson (Skogforsk) and Back Tomas Ersson (Swedish University of Agricultural Sciences) Abstract Abstract Because labor costs for manual tree planting are steadily increasing, tree planting robots are of interest.
We created a general simulation model to analyze the planting results of conceptual tree planting robots. The results show that robots can plant an adequate number of seedlings per hectare, but the environmental impact may become a problem if overly large digging tools are used during the soil preparation phase. Machine Learning-based Uncertainty Prediction for Efficient Global Optimization Zizhou Ouyang and Belen Martin-Barragan (University of Edinburgh) and Xuefei Lu (Université Côte d'Azur) Abstract Abstract Efficient global optimization (EGO) is widely used to solve black-box, expensive-to-evaluate objectives. However, when functions scale to high dimensions, classical EGO runs into problems, as it requires considerable computational overhead (i.e., the high cost of matrix inversion in Kriging). To address this challenge, we propose a machine learning-based uncertainty prediction method as an emerging alternative that replaces Kriging with a machine learning model and an uncertainty prediction estimate. The proposed uncertainty prediction method can be applied to most discriminative machine learning models to solve global optimization problems. This study presents an example that leverages the discriminative support vector machine (SVM) to build and update the metamodel. A score function based on the metamodel prediction and an uncertainty estimation method are applied to balance exploration and exploitation. Numerical results show that our method performs well on high-dimensional benchmark test functions. Simulation-based Optimization Framework for the Third Level of Digital Twin Seon Han Choi (Ewha Womans University) and Changbeom Choi (Hanbat National University) Abstract Abstract This paper proposes a simulation-based optimization (SBO) framework for a third level of digital twin (DT3). Efficient SBO is necessary for DT3, which optimally controls the corresponding physical system based on the synchronized simulation model in real time. To this end, the framework consists of three parts: a preprocessor to reduce the search space, an SBO algorithm to decrease the number of simulation replications, and a distributed/parallel simulation environment to shorten the execution time of replications. The framework suggests appropriate SBO algorithms in consideration of the characteristics of the simulation model, search space, and optimization objective so that practitioners can easily apply DT3 to various fields. Simulation and AI in Manufacturing Shashank Siddapur, Ankita Prasad, and Jyoti More (John Deere India Pvt Ltd) Abstract Abstract Simulation has long been used as a strategic decision support tool. Technologies such as Machine Learning and Artificial Intelligence have proven to be strong contenders for decision making as well. Simulation can provide data for training an AI policy in industrial areas where data availability and scenario coverage pose a challenge. We introduce use cases in the field of manufacturing where simulation-based AI policies can serve the purpose better, considering the dynamic nature of the system. Synchronous Manufacturing Simulation for Real Time Decision Making Nitin Sharan, Sudhir Pandey, and Jyoti More (John Deere India Pvt. Ltd) Abstract Abstract Engineering applications across the product lifecycle are becoming entwined with real-time data, enabling faster and easier communication among global manufacturing organizations. In the age of the Industrial Internet of Things (IIoT), production resources on the shop floor are more connected than ever.
Integrating IIOT with Discrete Event Simulation (DES) will help us to enhance manufacturing processes and gain insights on the potential pain areas of the system in real time. Integration of Discrete-event Simulation in the Planning of a Hydrogen Electrolyzer Production Facility Stefan Galka and Lukas Schuhegger (Ostbayerische Technische Hochschule Regensburg (OTH)) Abstract Abstract In the context of production and factory planning, the expansion of the factory must already be taken into account during initial planning. This results in an increase in planning complexity, as the involved planners have to know the expansion stages of the factory in the different time periods and have to evaluate concept modifications across all time periods. This paper presents an idea for a planning tool, which takes expansion stages into consideration. The data model contains all relevant information to generate a simulation model of the factory in an almost automated way. The aim is to enable factory planners to quickly investigate concept changes with the help of simulation, for example, to identify bottlenecks. Effect of the Private Brand on the Game Model of Food Supply Chain Natsuki Morita, Hayato Dan, and Hitoshi Yanami (Fujitsu Limited); Shota Suginouchi (Aoyama Gakuin University); Mizuho Sato (Tokyo University of Agriculture); Hajime Mizuyama (Aoyama Gakuin University); and Masatoshi Ogawa (Fujitsu Limited) Abstract Abstract The reduction of food loss and decision regarding whether to adopt private brand (PB) are critical issues for food supply chain (SC) management. In this study, the effect of PB introduction to the food SC was examined. Initially, the stakeholders’ decision-making in the food SC was modeled as a game with manufacturers and supermarkets considered as game players. The Japanese milk SC was considered as a case study. Next, using the proposed method, the equilibrium states of the game were efficiently searched and compared for cases with and without PB introduction. In our experiments, the total profit of the food SC increased, total amount of waste decreased, and consumer utility increased by introducing PB. Maintenance Decision-Making Supported by a Multi-fidelity Simulation Optimization Framework Yiyun Cao, Christine Currie, and Stephan Onggo (University of Southampton) and Russell R. Barton (The Pennsylvania State University) Abstract Abstract Digital twin technology is becoming more prevalent in manufacturing but if simulation is to be used to make real-time decisions, there is a need to improve the efficiency of simulation optimization methods. We describe the application of a multi-fidelity simulation optimization framework to a repair scheduling problem on a production line. When several machines are out of action on a production line, it is not obvious how to choose the order in which they should be repaired and the optimal choice will depend on the current state of the line. Simulation can be used to estimate the throughput of the line in the short term future for different repair orders and a given system state, but the speed at which results are required necessitates the development of an efficient optimization framework that minimizes the number of replications made of the complex simulation model. 
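The multi-fidelity idea described in the preceding abstract, screening candidate repair orders with a cheap model before spending expensive replications on the most promising ones, can be illustrated with a minimal sketch. The toy simulators, parameter values, and function names below are illustrative assumptions, not the authors' framework or production-line models.

```python
import itertools
import random

# Illustrative sketch only: a two-stage multi-fidelity screening loop for
# choosing a repair order. Both "simulators" below are toy stand-ins.

def low_fidelity_throughput(order):
    # Cheap deterministic proxy for expected throughput under a given repair order.
    return -sum(pos * rate for pos, rate in enumerate(order))

def high_fidelity_throughput(order, rng):
    # Expensive stochastic stand-in: the proxy value plus simulation noise.
    return low_fidelity_throughput(order) + rng.gauss(0, 0.5)

def choose_repair_order(machine_repair_rates, keep=3, reps=20, seed=1):
    rng = random.Random(seed)
    candidates = list(itertools.permutations(machine_repair_rates))
    # Stage 1: screen all candidate orders with the cheap model.
    shortlist = sorted(candidates, key=low_fidelity_throughput, reverse=True)[:keep]
    # Stage 2: spend expensive replications only on the shortlisted orders.
    def avg_high(order):
        return sum(high_fidelity_throughput(order, rng) for _ in range(reps)) / reps
    return max(shortlist, key=avg_high)

print(choose_repair_order([0.8, 1.2, 0.5]))
```

The design point is simply that the low-fidelity pass limits how many candidates ever reach the replication-hungry model, which is what makes near-real-time use plausible.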
Towards A Novel Protocol for Efficient Control and Usage of Traffic Simulation Zhuoxiao Meng (Technische Universität München, Huawei Munich Research Center) Abstract Abstract In this work, we propose the Traffic Simulation Control Protocol (TraSCoP), a universal protocol for traffic simulation systems, allowing efficient simulation of traffic management systems by controlling running traffic simulations from external connected applications. Compared to existing approaches, TraSCoP enables the direct retrieval of temporally aggregated data with an event-driven control architecture, resulting in a substantial reduction in communication overhead and hence better simulation performance. A proof-of-concept study on the simulation of an adaptive traffic light control system demonstrates a 4.5 times speed-up using TraSCoP compared to TraCI, a widely used protocol for controlling traffic simulators. Towards an Online Data Analysis Architecture for Large-Scale Distributed Simulations Xiaorui Du (Technische Universität München) Abstract Abstract Online data analysis plays an important role in simulations. However, with the rise of distributed and large-scale simulations, designing an efficient online data analysis architecture is particularly challenging, since it requires efficiently retrieving and processing the massive data produced by the distributed simulation. Many existing solutions use high-performance computing resources or big data platforms to build online data analysis systems. However, none of them can be applied to distributed simulations. In our work, we propose a novel online data analysis architecture based on the concept of Modelling and Simulation as a Service (MSaaS) with the goal of supporting efficient data analysis in large-scale distributed simulations. Towards Performance-Aware Partitioning for Large-Scale Agent-Based Microscopic Distributed Traffic Simulation Anibal Siguenza-Torres (Technische Universität München, Huawei Munich Research Center) Abstract Abstract Parallel distribution is one way to scale up the performance of large-scale agent-based microscopic traffic simulations. One of the most decisive factors for good performance is the partitioning of the road network. To achieve high performance, the partitions need to be well load-balanced and to minimize the communication cost. Many approaches use the number of agents as a proxy to estimate the computational and communication costs, assuming a direct relation. However, depending on the simulation logic and the runtime environment, this may not hold true. This work instead proposes to directly measure the communication and computational costs and to use this information to generate performance-aware partitions. We believe that this would better exploit the system's capabilities, resulting in higher performance. Stochastic Root Finding via Bayes Decisions Chuljin Park and Dong Hyun Kim (Hanyang University) and Seong-Hee Kim (Georgia Institute of Technology) Abstract Abstract We consider the root finding problem for a one-dimensional function when the function can only be estimated from noisy responses and a unique root exists between given lower and upper bounds. A new approach, namely the trisection algorithm with Bayes decisions (TAB), is proposed. We investigate the theoretical properties of TAB and empirically compare the proposed algorithm with several existing algorithms.
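As a concrete illustration of the stochastic root-finding setting in the last abstract above, the sketch below estimates the sign of a noisy one-dimensional function at the midpoint of the current interval by averaging replications and then bisects. This is a generic illustration under assumed noise and sample-size choices, not the proposed TAB algorithm.

```python
import random

# Minimal sketch of stochastic root finding by bisection on noisy responses.
# The noisy oracle, sample size, and iteration count are arbitrary example choices.

def noisy_f(x, rng):
    return (x ** 3 - 2 * x - 5) + rng.gauss(0, 0.1)   # true root near 2.0946

def stochastic_bisection(lo, hi, iterations=40, samples=50, seed=0):
    rng = random.Random(seed)
    sign_lo = sum(noisy_f(lo, rng) for _ in range(samples)) / samples
    for _ in range(iterations):
        mid = (lo + hi) / 2
        est = sum(noisy_f(mid, rng) for _ in range(samples)) / samples
        # Keep the half-interval whose estimated endpoint signs still differ.
        if (est > 0) == (sign_lo > 0):
            lo, sign_lo = mid, est
        else:
            hi = mid
    return (lo + hi) / 2

print(stochastic_bisection(0.0, 4.0))   # approaches the root as the sample size grows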
Leveraging OSIRIS to simulate real-world ransomware attacks on organizations Jeongkeun Shin, Geoffrey Dobson, Richard Carley, and Kathleen Carley (Carnegie Mellon University) Abstract Abstract The scale of ransomware damage increases every year. It is difficult to predict the magnitude of ransomware damage to an organization since many human factors are involved: ransomware infection usually starts with end users downloading malware from a phishing email or message. In this paper, we leverage the OSIRIS (Organization Simulation In Response to Intrusion Strategies) framework to simulate the Avaddon ransomware attack on virtual organizations and analyze how three factors, organization size, proportion of communication, and end users' cybersecurity expertise level, affect the overall impact and propagation of ransomware damage inside the organization. Theory-guided Neural Network for Agent-based Modeling and Simulation Hiroaki Yamada and Shohei Yamane (Fujitsu Ltd.) Abstract Abstract Recently, constructing agent-based simulations (ABS) from large amounts of detailed behavioral data and statistical models, such as hidden Markov models or recurrent neural networks, has attracted attention. However, it is difficult to use such models for assessing social policies because of a lack of behavioral data collected under those policies. In this paper, we propose a new training framework based on theory-guided neural networks, which trains neural networks by taking advantage of theoretical knowledge. How Fault Lines Formed In The Organization Influence Double-loop Learning Kazuma Midorikawa and Shingo Takahashi (Waseda University) Abstract Abstract One of the factors that inhibit double-loop learning in organizations is the formation of fault lines (hereafter referred to as "FL") caused by diversity in the organization. FL have a negative impact on the organization because they promote interactions only among individuals with similar attributes, so interactions among individuals with diverse attributes throughout the organization cannot be promoted. This paper builds a model to analyze how the strength of FL influences the organizational learning of individuals and of the entire organization. SUMMIT: A Multi-Modal Agent-Based Simulation Platform for Urban Transit Systems Nasri Bin Othman, Vasundhara Jayaraman, Wyean Chan, Zhen Xiang Kenneth Loh, Rishikeshan Rajendram, Rakhi Manohar Mepparambath, Pritee Agrawal, Muhamad Azfar Ramli, and Zheng Qin (Institute of High Performance Computing) Abstract Abstract We present a city-scale transportation simulation platform, named SUMMIT (Singapore Urban Multi-Modal Integrated Transport Simulator), that integrates multiple public transit systems together with a central commuter control. At the core of SUMMIT, a message passing framework called Fabric (Fast, Agent-Based, Reproducible, Integrated Co-simulation) helps synchronize the different transit systems and the commuters transiting between them. Using Fabric, each transit system can be implemented independently as a stand-alone simulator, and the central commuter control is responsible for generating the route choice decisions. SUMMIT currently integrates three key public transit modes in Singapore: train, bus, and taxi. This holistic simulation platform can be particularly useful for analyzing the complex dynamics in urban transit systems. In an application of SUMMIT, we simulate a hypothetical major train disruption in Singapore.
We analyze mitigation scenarios, where bridging buses are deployed, and the impact of information dissemination delay on commuters. Exploring the Complexity in Managing End-of-Life Lithium-Ion Battery: A System Dynamics Perspective Bhanu Pratap (Indian Institute of Technology Madras), Krishna Mohan T V (University of Exeter Business School), R K Amit (Indian Institute of Technology Madras), and Shankar Venugopal (Mahindra & Mahindra) Abstract Abstract The growing share of electric vehicles (EVs) in the transportation sector presents a challenge in handling end-of-life (EOL) lithium-ion batteries (LIB), which serve as the primary power source of EVs. EOL LIB can be recycled to obtain strategic raw materials by using different recycling methods, or they can be used as second-use LIB. This research presents a systematic approach for analyzing the trade-off between LIB recycling and LIB second-use by applying system dynamics modeling. Our study shows that increasing the collection rate reduces landfill volumes and increases the quantity of recovered strategic raw materials. Model results indicate that cascading use of LIB for stationary storage applications will limit their availability for recycling. Cascade use of EOL LIB eases the demand for new LIB by using repurposed LIB for stationary energy storage applications, but it delays the recovery of recycled material by extending the life of the LIB. Analytical and Simulation-Driven Machine Learning Methods for Generating Real-Time Outpatient Length-of-Stay Predictions Najiya Fatma and Varun Ramamohan (Indian Institute of Technology Delhi) Abstract Abstract In this work, we consider real-time prediction of lengths-of-stay (LOS) for outpatients at a primary healthcare (PHC) facility via two methods: an analytical queuing-theoretic predictor, and simulation-driven machine learning (SimML) predictors. These LOS predictions are made at the point in time at which the patient is expected to arrive at the facility (i.e., at time t+δ), using the system state of the PHC recorded at the current time t. We develop a discrete-event simulation (DES) of the operational flows of outpatients, inpatients and childbirth patients treated at the PHC. Both the analytical and SimML predictors use real-time system state information such as the number of patients waiting and the elapsed service times of patients undergoing service. SimML predictors are trained using system state data generated by the DES for each outpatient. LOS predictions can inform patient decisions regarding which facility to visit and help manage resource utilization equitably across a network of similar facilities. Simulation Development Environment using Simulation Snapshot Manager Jaiyun Lee (Hanbat National University) Abstract Abstract Modeling and simulation solve problems by developing a simulator that represents a real-world target system in computer code. However, as the problem to be solved becomes more complex, developing the simulation model and software takes considerable time. Therefore, we propose a development methodology and environment that can implement simulators efficiently by reusing verified models and implementing only the new ones. The proposed method saves the simulation models verified during the simulation development and operation process and reuses them in the development and execution of new simulation models.
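The snapshot-and-reuse workflow described in the preceding abstract can be sketched minimally as follows; the SnapshotManager class, file layout, and example model are hypothetical stand-ins for illustration rather than the proposed environment.

```python
import pickle
from pathlib import Path

# Minimal sketch of the snapshot idea: persist a verified model object so that
# later development efforts can reload and extend it instead of rebuilding it.

class SnapshotManager:
    def __init__(self, directory="snapshots"):
        self.directory = Path(directory)
        self.directory.mkdir(exist_ok=True)

    def save(self, name, model):
        # Store the verified simulation model under a reusable name.
        with open(self.directory / f"{name}.pkl", "wb") as fh:
            pickle.dump(model, fh)

    def load(self, name):
        # Reload a previously verified model for reuse in a new study.
        with open(self.directory / f"{name}.pkl", "rb") as fh:
            return pickle.load(fh)

manager = SnapshotManager()
manager.save("verified_queue_model", {"servers": 2, "service_rate": 1.5})
reused = manager.load("verified_queue_model")
reused["servers"] = 3   # extend the reused model rather than rebuilding it
```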
Model Based Reconfigurable Unmanned System Using Discrete Event System Formalism Seyoung Han, Jaiyun Lee, Changbeom Choi, and Eunkyung Kim (Hanbat National University) Abstract Abstract Unmanned systems are widely used in various fields of society. In order to develop a service using unmanned systems, a developer should understand the hardware and software of the systems. However, developers may not be able to develop a service quickly since they cannot possess all of this knowledge. This paper proposes a control and management environment for multiple unmanned systems using the discrete event system formalism. The proposed system supports the modeling of control command sequences using the discrete event system formalism and implements them as a simulation model with a hardware controller. Also, the system may reuse discrete event system models to control multiple unmanned systems. Simulation-Optimization Configurations for Fugitive Interception Irene S. van Droffelaar and Jan H. Kwakkel (Delft University of Technology), Jelte P. Mense (Utrecht University), and Alexander Verbraeck (Delft University of Technology) Abstract Abstract Simulation-optimization can be used to support near-real-time decision-making, but timely calculation of the solution is essential. Besides increasing computational power and algorithm efficiency, the configuration in which simulation and optimization are combined can reduce the computation time of simulation-optimization for large problems. We compare two configurations using a fugitive interception problem and show the potential of sequential simulation-optimization to mitigate the expensive optimization of simulation models. A Bayesian Optimization Algorithm for Constrained Problems with Heteroscedastic Noise Sasan Amini (Flanders Make and Data Science Institute (Hasselt University)) and Inneke van Nieuwenhuyse (Hasselt University) Abstract Abstract In this research, we develop a Bayesian optimization algorithm to solve expensive constrained problems. We consider the presence of heteroscedastic noise in the evaluations and propose an identification procedure that accounts for this uncertainty in recommending the final optimal solution(s). The primary experimental results show that the proposed algorithm is capable of finding a set of optimal (or near-optimal) solutions in the presence of noisy observations. Optimising and Analysing the Use of Drones in Healthcare Melanie Reuter-Oppermann and Sara Ellenrieder (Technische Universität Darmstadt) Abstract Abstract Providing timely access to healthcare for all inhabitants of a country is extremely challenging, especially in rural areas and during crises. Digital innovations like telehealth or the use of drones can help to improve access to care. Therefore, we have developed mathematical models to locate drones that transport defibrillators to out-of-hospital cardiac arrests as well as blood products to hospitals. The expected utilisation of these drones and the expected care improvements are evaluated within a discrete event simulation. Three Tier Incremental Approach to Development of Smart Corridor Digital Twin Abhilasha Saroj, Dickness Kwesiga, Angshuman Guin, and Michael Hunter (Georgia Institute of Technology) Abstract Abstract In the development of smart corridor traffic operations applications that utilize real-time traffic and infrastructure data, a digital twin can be a crucial testbed. However, the development of such a digital twin testbed requires the integration of several dynamic components.
Existing literature lacks a framework for such a development effort. This effort seeks to address that shortfall. Along with the needed data investigations, it presents a three-tiered incremental framework for digital twin development: 1) development of a prepopulated, historic-data-driven model, 2) development of a pseudo digital twin architecture, and 3) development of a real-time, data-driven digital twin. The three-tiered approach provides guidelines for developing different “mock” digital twin platforms, enabling the execution of multiple trials in faster simulation environments and incremental digital twin architecture updates. Last-Mile Fulfillment in an Omnichannel Grocery Retailing Environment: A Simulation Study Yale Herer and Noemie Balouka (Technion - Israel Institute of Technology) Abstract Abstract We present a dynamic solution approach for solving the last-mile fulfillment decision in an omnichannel grocery retailing environment. Each incoming order can be fulfilled either from the dark store or from a brick-and-mortar (B&M) store. In the existing system, online customers are offered only those products available in both the dark store and the B&M store. Our goal is to increase the offering to online customers to products available in either the dark store or the B&M store. We develop dynamic last-mile fulfillment policies whose goal is to minimize overall costs. We distinguish orders according to the location(s) that can fill the order. By means of a computational study, we compare our dynamic policies both to the omnisciently optimal solution and to the legacy policy. We find that our policies achieve near-optimal solutions. Analysis of Covid-19 Using a Modified SEIR Model to Understand the Cases Registered in Singapore, Spain, and Venezuela Raúl Isea (Fundación Instituto de Estudios Avanzados) and Rafael Mayo-García (CIEMAT) Abstract Abstract This work proposes a modification of a compartmental-type model based on the Susceptible-Exposed-Infected-Recovered (SEIR) scheme to describe the dynamics of contagion by Covid-19. As an example, the different incidents that occurred in Singapore, Spain, and Venezuela have been analyzed to demonstrate the usefulness of the methodology developed in this work, which can be extended to other countries. Robots in Logistics: Research Issues and Trends Kyung-A Kim, Boram Kim, and Hosang Jung (Inha University) Abstract Abstract Robots are increasingly being used in logistics. In line with this trend, many researchers are studying robots in logistics. This research identifies the research topics and trends in such works over the past 20 years. To do this, Latent Dirichlet Allocation (LDA), a topic modeling approach, was applied. A total of 16 topics are extracted, and the analysis shows that these topics relate to both application areas and robot-related technologies. The topics regarding robot-related technologies can be divided into mechanical topics and control/optimization algorithms. Regarding the application areas of robots in logistics, most of the existing research articles focus on transportation. Finally, the change in the proportion of these 16 topics by year is investigated and summarized.
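For readers unfamiliar with the topic-modeling step used in the study above, the following minimal sketch applies Latent Dirichlet Allocation to a toy corpus with scikit-learn. The corpus, preprocessing, and the choice of two topics are placeholder assumptions; the study itself extracted 16 topics from two decades of articles.

```python
# Minimal sketch of LDA-based topic extraction from a handful of toy "abstracts".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "mobile robot picking warehouse automation",
    "path planning optimization algorithm for automated guided vehicles",
    "gripper design and mechanical actuation for parcel handling",
    "reinforcement learning control for sorting robots in logistics",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(abstracts)          # document-term matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]   # top words per topic
    print(f"topic {k}: {', '.join(top)}")
```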
Referenced Filtering: a Case for Avoiding Each-to-each Computations Victor Diakov and Tanvi Anandpara (Simfoni Ltd) Real-Time Indoor Daylight Illuminance Simulation of an Existing Building using Minimal Data Hyeong-Gon Jo, Seo-Hee Choi, and Cheol-Soo Park (Seoul National University) Abstract Abstract This study suggests a real-time indoor daylight simulation method using minimal data (two reference sensors and two prior measurements at target points). The minimal data were substituted for components in the daylight coefficient equation. The minimalistic daylight prediction approach was successfully validated against seven days of on-site measurements (MAPE of 7.3%, NMBE of 0.3%). Interaction Modeling for Independent Water and Energy Models with Distributed Simulation Hessam Sarjoughian and Mostafa Fard (Arizona State University) Abstract Abstract Modeling the interactions between separate models contributes to building flexible hybrid simulation frameworks. Interaction models facilitate the composition of disparate, separately developed models. As such, given different types of models, they can be combined using other models. This is achieved using the Knowledge Interchange Broker (KIB) approach. The nexus of the disparate models to be composed defines the Interaction Models. This approach is grounded in modular input/output components with explicit specifications for time, data mapping, synchronized control, and concurrent execution. The Parallel DEVS and RESTful framework is used as a realization of this approach for modeling and simulating an exemplar Water-Energy system for Phoenix, Arizona. Specifically, a DEVS-based Interaction Model (DEVS-IM) framework is developed based on System Theory, KIB, and componentized WEAP (i.e., for water models) and LEAP (i.e., for energy models) simulators. Case Study Competition Finalists' Presentations I Chair: Haobin Li (National University of Singapore, Centre for Next Generation Logistics) Case Study Competition Finalists' Presentations Haobin Li (National University of Singapore) Abstract Abstract In line with the spirit of “Reimagine Tomorrow”, the theme for this year’s Winter Simulation Conference (WSC), we are thrilled to introduce our Case Study Competition which will examine the role of simulation in next-generation industrial systems as well as plant the seeds of collaboration between academia and industry. Titled “Smart Simulation for Intelligence Incubation”, the competition aims at demonstrating and promoting simulation’s ability to cooperate with optimization rules and data learning in order to improve the overall performance of intelligent systems. It will provide participants with an opportunity to explore and exploit the use of simulation tools in supporting real-time decision analysis under different scenarios. Well-known for its efficiency in logistics services, Singapore is currently in the midst of developing the world’s largest next-generation container port. 2022 WSC examines the country’s next-generation logistics and port systems integration and how simulation can provide quality solutions in planning for the future. To further support this effort, the organizers have selected a classic case – automated grid mover system – from the maritime port and logistics industry as the main case study for the competition. In this presentation, the top 5 teams will present their results.
Case Study Competition Finalists' Presentations II Chair: Haobin Li (National University of Singapore, Centre for Next Generation Logistics) Case Study Competition Finalists' Presentations Haobin Li (National University of Singapore) Abstract Abstract The session description is the same as for Case Study Competition Finalists' Presentations I above.
Plenary WSC@Singapore: Reimagine Tomorrow Chair: Peter Lendermann (D-SIMLAB Technologies Pte Ltd) Plenary Titans of Simulation: Michael Fu Chair: Ek Peng Chew (National University of Singapore, Centre for Next Generation Logistics) Stochastic Gradients: From Single Sample Paths to Conditional Monte Carlo to Machine Learning pdfTitan_Day 1 Screen View.mp4 from INFORMS on Vimeo. Plenary Titans of Simulation: Leon McGinnis Chair: Peter Lendermann (D-SIMLAB Technologies Pte Ltd) Reimagining Simulation in Discrete-Event Logistics Systems pdf2022_WSC_Titan_Of_Simulation from INFORMS on Vimeo. Track Coordinator - Analysis Methodology: David J. Eckman (Texas A&M University), Jun Luo (Shanghai Jiao Tong University), Wei Xie (Northeastern University) Analysis Methodology Uncertainty Quantification Chair: Raghu Pasupathy (Purdue University) Analysis Methodology Metamodels and Optimization Chair: Mina Jiang (Arizona State University) Robust Simulation Design for Generalized Linear Models in Conditions of Heteroscedasticity or Correlation pdfAnalysis Methodology Quantile Estimation Chair: Drupad Parmar (Lancaster University) Analysis Methodology Estimating Densities and Rare Events Chair: Bruno Tuffin (Inria, University of Rennes) Density Estimators of the Cumulative Reward up to a Hitting Time to a Rarely Visited Set of a Regenerative System Best Contributed Theoretical Paper - Finalist pdfAnalysis Methodology Random Processes and Optimization Chair: Wei Xie (Northeastern University); Zhengchang Hua (Southern University of Science and Technology, University of Leeds) Track Coordinator - Advanced Tutorials: Wai Kin (Victor) Chan (Tsinghua-Berkeley Shenzhen Institute, TBSI), Hong Wan (North Carolina State University) Advanced Tutorials Distributed Agent-based Simulation with Repast4Py Chair: Haobin Li (National University of Singapore, Centre for Next Generation Logistics) Advanced Tutorials Let's do Ranking & Selection Chair: Hong Wan (North Carolina State University) Advanced Tutorials Hybrid Simulation Modeling Formalism via O²DES Framework for Mega Container Terminals Chair: Michael Kuhl (Rochester Institute of Technology) Advanced Tutorials EMS Operations Management: Simulation, Optimization, and New Service Models Chair: Wentong Cai (Nanyang Technological University) Advanced Tutorials From Discovery to Production: Challenges and Novel Methodologies for Next Generation Biomanufacturing Chair: Nan Kong (Purdue University) Advanced Tutorials A Tutorial on How to Set Up a System Dynamic Simulation on the Example of the Covid-19 Pandemic Chair: Jonathan Ozik (Argonne National Laboratory) Advanced Tutorials Advanced Tutorial: Methods for Scalable Discrete Simulation Chair: Abdelgafar Hamed (Infineon Technologies AG) Track Coordinator - Agent-Based Simulation: Chris Kuhlman (University of Virginia), Bhakti Stephan Onggo (University of Southampton) Agent-based Simulation, Logistics, Supply Chains, Transportation Organization of Transport Systems Chair: Michael Kuhl (Rochester Institute of Technology) Designing Mixed-Fleet of Electric and Autonomous Vehicles for Home Grocery Delivery Operation: An Agent-Based Modelling Study pdfAgent-based Simulation Methodological Issues with Multi-Agent Games Chair: Yan Lu (Old Dominion University) Agent-based Simulation Emergent Behaviors and Construction Labor Productivity Chair: Chris Kuhlman (University of Virginia) Identifying Correlates of Emergent Behaviors In Agent-Based Simulation Models Using Inverse Reinforcement Learning Best Contributed Theoretical Paper - 
Finalist pdfAgent-based Simulation Evacuation Modeling and Societal Polarization Chair: Anastasia Anagnostou (Brunel University London) Simulation-based Analysis of Evacuation Elevator Allocation for A Multi-level Hospital Emergency Department Best Contributed Applied Paper - Finalist pdfTrack Coordinator - Aviation Modeling and Analysis: Miguel Mujica Mota (Amsterdam University of Applied Sciences), John Shortle (George Mason University) Aviation Modeling and Analysis Aviation I : Human-in-the-Loop Chair: Dehghani Mohammad (Northeastern University) Modelling Aircraft Priority Assignment by Air Traffic Controllers during Taxiing Conflicts Using Machine Learning pdfAviation Modeling and Analysis Aviation II: Aviation Operations and Airspace Chair: Michael Schultz (Bundeswehr University Munich) Towards Automated Apron Operations - Training of Neural Networks for Semantic Segmentation using Synthetic LiDAR Sensors pdfTrack Coordinator - Complex and Resilient Systems: Saurabh Mittal (MITRE Corporation), Claudia Szabo (The University of Adelaide, University of Adelaide) Complex and Resilient Systems Modeling and Data and their Effect on Policy Chair: Claudia Szabo (University of Adelaide, The University of Adelaide) SITEM: A Framework for Integrated Transport and Energy Systems Modelling for City-wide Electrification Scenario Planning pdfTrack Coordinator - Covid-19 and Epidemiological Simulations: Edward Huang (George Mason University), Hui Xiao (Southwestern University of Finance and Economics) COVID-19 and Epidemiological Simulations Effectiveness of Interventions Against the Spread of COVID-19 Chair: Edward Huang (George Mason University) Regional Maximum Hospital Capacity Estimation for Covid-19 Pandemic Patient Care in Surge through Simulation Best Invited Applied Paper - Finalist pdfCOVID-19 and Epidemiological Simulations Modeling the Spread of COVID-19 Chair: Felisa Vazquez-Abad (Hunter College CUNY) COVID-19 and Epidemiological Simulations Models and Case Studies of COVID-19 Impacts and Interventions Chair: Philippe J. Giabbanelli (Miami University) COVID-19 and Epidemiological Simulations Agent-based Models for Tracking the Spread of COVID-19 Chair: Xiao Feng Yin (Institute of High Performance Computing, A*STAR Singapore) Assessing Transmission Risks of SARS-CoV-2 Omicron Variant in U.S. School Facilities and Mitigation Measures pdfTrack Coordinator - Data Science and Simulation: Abdolreza Abhari, Abdolreza Abhari (Ryerson University), Cheng-bang Chen (University of Miami), Mani Sharifi (Ryerson University) Data Science and Simulation Artificial Intelligence/Machine Learning in DSS I Chair: Abdolreza Abhari High-Resolution Shape Deformation Prediction in Additive Manufacturing using 3D CNN Best Contributed Applied Paper - Finalist pdfData Science and Simulation Artificial Intelligence/Machine Learning in DSS II Chair: Rong Zhou (IHPC, A-STAR) Data Science and Simulation Model (Theory) in DSS Chair: Rie Gaku (St. Andrew’s University, Momoyama Gakuin University) Exact Optimal Fixed Width Confidence Interval Estimation for the Mean Best Contributed Theoretical Paper - Finalist pdfFeature-Modified SEIR Model for Pandemic Simulation and Evaluation of Intervention Approaches Best Contributed Theoretical Paper - Finalist pdfData Science and Simulation Service Operations Management in DSS Chair: Philippe J. 
Giabbanelli (Miami University) A Self-Adaptive Search Space Reduction Approach for Offshore Wind Farm Installation using Multi-Installation Vessels pdfData Science and Simulation Simulation-based Analytics in DSS Chair: Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, University of Southern Denmark) Environmental and Sustainability Applications Buildings and Cities Chair: Albert Thomas (Indian Institute of Technology Bombay) Environmental and Sustainability Applications Agriculture and Farming Chair: Albert Thomas (Indian Institute of Technology Bombay) Environmental and Sustainability Applications Construction and Infrastructure Chair: Neda Mohammadi (Georgia Institute of Technology) Using Simulation-Based Forecasting to Project Singapore’s Future Residential Construction Demand and Impacts on Sustainability pdfEnvironmental and Sustainability Applications, Simulation Down Under Simulation Down Under Chair: David Post (CSIRO) Track Coordinator - Financial Engineering: Ben Feng (University of Waterloo), Guangwu Liu (City University of Hong Kong) Financial Engineering Importance Sampling in Financial Engineering Chair: Kun Zhang (City University of Hong Kong) Combining Retrospective Approximation with Importance Sampling for Optimising Conditional Value at Risk pdfFinancial Engineering Modeling and Estimating Financial and Actuarial Risks Chair: Ben Feng (University of Waterloo) Financial Engineering, Model Uncertainty and Robust Simulations, Simulation Optimization Sampling and Regression Techniques Chair: Jiaming Liang (Yale University; The Chinese University of Hong Kong, Shenzhen) Grand Challenges Grand Challenges in Simulation Application Domains Chair: Oliver Rose (University of the Bundeswehr Munich) Track Coordinator - Healthcare Applications: Bjorn Berg (University of Minnesota), Christine Currie (University of Southampton), Masoud Fakhimi (University of Surrey) Healthcare Applications Operational Planning for Critical Patients Chair: Vishnunarayan Girishan Prabhu (Clemson University) Healthcare Applications Discrete-Event Simulation Modeling to Address Operational Questions in Healthcare Chair: Fumiya Abe-Nornes (University of Michigan) Healthcare Applications Simulation Optimization-based Methodology for Healthcare Chair: Lambros Viennas (University of Surrey, Bridgnorth Aluminium Ltd.) Healthcare Applications Simulation Models Evaluating Patient Flow in Different Care Settings Chair: S. M. Niaz Arifin (University of Notre Dame) Discrete Event Simulation to Evaluate Shelter Capacity Expansion Options for LGBTQ+ Homeless Youth pdfHealthcare Applications Discrete-Event Simulation Models to Inform Healthcare Decisions Chair: Jung Hyup Kim (University of Missouri) Simulation And Analysis Of Disruptive Events On A Deterministic Home Health Care Routing And Scheduling Solution pdfHealthcare Applications Simulation Models to Inform Healthcare Decisions I Chair: Alison Harper (University of Exeter) Healthcare Applications Simulation Models to Inform Healthcare Decisions II Chair: Georgiy Bobashev (RTI International) Track Coordinator - Hybrid Simulation: Andrew J. Collins (Old Dominion University), Caroline C. 
Krejci (The University of Texas at Arlington), Antuela Tako (Loughborough University, University of Kent) Hybrid Simulation Hybrid Simulation with Advanced Technology Chair: Le Khanh Ngan Nguyen (University of Strathclyde) Explainable AI for Data Farming Output Analysis: A Use Case for Knowledge Generation through Black-Box Classifiers pdfHybrid Simulation Hybrid Simulation Methodology Chair: David Bell (Brunel University London); Antuela Tako (Loughborough University, University of Kent) Hybrid Simulation Hybrid Simulation Applications Chair: David Bell (Brunel University London) Hybrid Simulation Hybrid Simulation in Human Systems Chair: Andrew J. Collins (Old Dominion University) A System Dynamics Model for Studying the Resiliency of Supply Chains and Informing Mitigation Policies for Responding to Disruptions pdfTrack Coordinator - Introductory Tutorials: Anastasia Anagnostou (Brunel University London), Canan Gunes Corlu (Boston University) Introductory Tutorials Tutorial: Metamodeling for Simulation Chair: Chris Kuhlman (University of Virginia) Introductory Tutorials How to Build Valid and Credible Simulation Models Chair: Edward Y. Hua (MITRE Corporation) Introductory Tutorials Resource Modeling in Business Process Simulation Chair: Masoud Fakhimi (University of Surrey) Introductory Tutorials Computer Assisted Military Experimentations Chair: Anastasia Anagnostou (Brunel University London) Introductory Tutorials Simheuristics: An Introductory Tutorial Chair: Canan Gunes Corlu (Boston University) Introductory Tutorials Simulation: The Critical Technology in Digital Twin Development Chair: Canan Gunes Corlu (Boston University) Introductory Tutorials Defining DEVS Models Using the Cadmium Toolkit Chair: Cristina Ruiz-Martín (Carleton University); Gabriel Wainer (Carleton University) Introductory Tutorials Digital Twin as an Aid for Decision-making in the Face of Uncertainty Chair: Andrea Ferrari (Politecnico di Torino) Track Coordinator - Logistics, Supply Chains, Transportation: Dave Goldsman (Georgia Institute of Technology), Markus Rabe (TU Dortmund University, MB / ITPL), Lei Zhao (Tsinghua University) Agent-based Simulation, Logistics, Supply Chains, Transportation Organization of Transport Systems Chair: Michael Kuhl (Rochester Institute of Technology) Designing Mixed-Fleet of Electric and Autonomous Vehicles for Home Grocery Delivery Operation: An Agent-Based Modelling Study pdfLogistics, Supply Chains, Transportation Intralogistics Chair: Xueping Li (University of Tennessee) Design and Control of Shuttle-based Storage and Retrieval Systems Using a Simulation Approach Best Contributed Theoretical Paper - Finalist pdfLogistics, Supply Chains, Transportation Machine Learning and Data Analysis Chair: Maylin Wartenberg (Hochschule Hannover) Application of Deep Reinforcement Learning for Planning of Vendor-Managed Inventory for Semiconductors pdfLogistics, Supply Chains, Transportation AI and Optimization Chair: Steffen Strassburger (Technische Universität Ilmenau) Solving Facility Location Problems for Disaster Response Using Simheuristics and Survival Analysis: A Hybrid Modeling Approach pdfLogistics, Supply Chains, Transportation Distribution and Warehouse Optimization Chair: Bhakti Stephan Onggo (University of Southampton) Determining the Optimal Work-Break Schedule of Temporary Order Pickers in Warehouses Considering the Effects of Physical Fatigue pdfDecision-making Impacts of Originating Picking Waves Process for a Distribution Center Using Discrete-event Simulation 
pdfLogistics, Supply Chains, Transportation Manufacturing Optimization Chair: Hai Wang (SMU) Logistics, Supply Chains, Transportation Transportation and Logistics Scheduling Chair: Klaus Altendorfer (Upper Austrian University of Applied Science) Logistics, Supply Chains, Transportation Food and Health Chair: Christos Alexopoulos (Georgia Institute of Technology) Logistics, Supply Chains, Transportation Supply Chain Applications Chair: Javier Faulin (Public University of Navarre, Institute of Smart Cities) Logistics, Supply Chains, Transportation Urban and Local Transport Chair: Marvin Auf der Landwehr (Hochschule Hannover) A Simulation-Optimization Model for Automated Parcel Lockers Network Design in Urban Scenarios in Pamplona (Spain), Zakopane, and Krakow (Poland) pdfCombining Survival Analysis and Simheuristics to Predict the Risk of Delays in Urban Ridesharing Operations with Random Travel Times pdfTrack Coordinator - Manufacturing Applications: Christoph Laroque (University of Applied Sciences Zwickau), Guodong Shao (National Institute of Standards and Technology) Manufacturing Applications Digital Twins in Manufacturing Chair: Guodong Shao (National Institute of Standards and Technology) Applying a Hybrid Model to Solve the Job-shop Scheduling Problem with Preventive Maintenance, Sequence-Dependent Setup Times And Unknown Processing Times pdfManufacturing Applications Machine Learning Chair: Giovanni Lugaresi (Politecnico di Milano) Application of Simulation based Reinforcement Learning for Optimizing Lot Dispatching Rules of Semiconductor Fab pdfDiscrete-Event Simulation and Machine Learning for Prototype Composites Manufacture Lead Time Predictions pdfManufacturing Applications Scheduling and Sequencing Chair: Thomas Felberbauer (St. Pölten University of Applied Sciences) Multi-Agent System Model For Dynamic Scheduling In Flexible Job Shops Subject To Random Machine Breakdown pdfReal-time Scheduling Based on Simulation and Deep Reinforcement Learning with Featured Action Space pdfManufacturing Applications Scheduling Chair: Christoph Laroque (University of Applied Sciences Zwickau) Manufacturing Applications Optimization Chair: Marina Meireles Pereira Mafia (University of Southern Denmark ) Enabling Knowledge Discovery from Simulation-based Multi-objective Optimization in Reconfigurable Manufacturing Systems pdfManufacturing Applications Simulation Approaches Chair: Klaus Altendorfer (Upper Austrian University of Applied Science) Manufacturing Applications Simulation Modeling Chair: Sumin Jeon (Siemens) Potential of Simulation Effort Reduction by Intelligent Simulation Budget Management for Multi-item and Multi-stage Production Systems pdfManufacturing Applications Simulation Applications Chair: Deogratias Kibira (National Institute of Standards and Technology, University of Maryland) Production Scheduling for Parallel Machines using Simulation Techniques: Case Study of Plastic Packaging Factory pdfA Biased-Randomized Simheuristic for a Hybrid Flow Shop with Stochastic Processing Times in the Semiconductor Industry pdfTrack Coordinator - Maritime System: Xinhu Cao (National University of Singapore, Industrial Systems Engineering and Management), Xiuju Fu (Institute of High Performance Computing), Zhuo Sun (Dalian Maritime University) Maritime Systems Maritime Systems Panel Discussion Chair: Xinhu Cao (National University of Singapore, Industrial Systems Engineering and Management) Maritime Systems Maritime Systems I Chair: Zhuo Sun (Dalian Maritime University) Maritime Systems 
Maritime Systems II Chair: Xinhu Cao (National University of Singapore, Industrial Systems Engineering and Management) Track Coordinator - MASM: Semiconductor Manufacturing: John Fowler (Arizona State University), Lars Moench (University of Hagen), Kan Wu (Chang Gung University) MASM MASM Panel: Industry-Academic Collaborations in Semiconductor Manufacturing Chair: John Fowler (Arizona State University) MASM Artificial Intelligence Applications Chair: Keyhoon Ko (VMS Global, Inc.) Maximizing Throughput, Due Date Compliance and Other Partially Conflicting Objectives Using Multifactorial AI-powered Optimization pdfMASM Fab Scheduling Chair: Dennis Xenos (Flexciton Limited) MASM Time Considerations in Semiconductor Manufacturing Chair: Raphael Herding (FTK – Forschungsinstitut für Telekommunikation und Kooperation e. V., Westfälische Hochschule) MASM Automated Material Handling Systems Chair: Young Jae JANG (Korea Advanced Institute of Science and Technology, Daim Research) MASM Semiconductor Manufacturing Equipment Chair: Dean Chu (National Taiwan University) MASM Batch Processing Chair: Kan Wu (Chang Gung University) MASM Simulation of Semiconductor Manufacturing Chair: Patrick Christoph Deenen (Eindhoven University of Technology, Nexperia) MASM Photolithography Scheduling Chair: Cathal Heavey (University of Limerick) Demonstration of the Feasibility of Real Time Application of Machine Learning to Production Scheduling pdfMASM Production Planning Chair: Tobias Voelker (University of Hagen) MASM Energy Considerations in Semiconductor Manufacturing Chair: Gabriel Weaver (Idaho National Laboratory); John Hasenbein (The University of Texas at Austin) MASM Scheduling Assembly/Test Operations Chair: Christian John Immanuel Boydon (National Taiwan University) Multi-agent Framework for Intelligent Dispatching and Maintenance in Semiconductor Assembly and Testing pdfMASM MASM Keynote Chair: Peter Lendermann (D-SIMLAB Technologies Pte Ltd) MASM Semiconductor Supply Chains Chair: Jan-Philip Erdmann (Infineon Technologies AG) Demand Predictability Evaluation for Supply Chain Processes Using Semantic Web Technologies Use Case pdfSimulated-Based Analysis Of Recovery Actions Under Vendor-Managed Inventory Amid Black Swan Disruptions In The Semiconductor Industry: A Case Study From Infineon Technologies AG pdfTrack Coordinator - Military and National Security Applications: Nathaniel D. Bastian (United States Military Academy), James Starling (U.S. Military Academy) Military and National Security Applications Remote Military and National Security Applications Chair: Nathaniel D. Bastian (United States Military Academy); James Starling (U.S. Military Academy) Supervised Machine Learning for Effective Missile Launch Based on Beyond Visual Range Air Combat Simulations pdfMilitary and National Security Applications Military Keynote Chair: Nathaniel D. Bastian (United States Military Academy); James Starling (U.S. Military Academy) Military and National Security Applications Air-Defense and Naval Applications Chair: Nathaniel D. Bastian (United States Military Academy); James Starling (U.S. Military Academy) Military and National Security Applications Military Communications Chair: Nathaniel D. Bastian (United States Military Academy); James Starling (U.S. Military Academy) Message Prioritization in Contested and Dynamic Tactical Networks using Regression Methods and Mission Context pdfMilitary and National Security Applications Military Operations Applications Chair: Nathaniel D. 
Bastian (United States Military Academy); James Starling (U.S. Military Academy) Track Coordinator - Modeling Methodology: Rodrigo Castro (Universidad de Buenos Aires, ICC-CONICET), Gabriel Wainer (Carleton University) Modeling Methodology Frameworks and Standards Chair: Adelinde Uhrmacher (University of Rostock) Towards a Unifying Framework for Modeling, Execution, Simulation, and Optimization of Resource-aware Business Processes pdfModeling Methodology DEVS Theory and Practice Chair: Cristina Ruiz-Martín (Carleton University) Modeling Methodology Applications Chair: Neal DeBhur (Arizona State University) Modeling Methodology Methodologies Chair: Ezequiel Pecker Marcosig (UBA, CONICET) Model Uncertainty and Robust Simulations Robust Simulation Optimization Chair: Sara Shashaani (North Carolina State University) Optimizing Input Data Acquisition for Ranking and Selection: A View Through the Most Probable Best pdfModel Uncertainty and Robust Simulations Decision Making under Input Uncertainty Chair: Wei Xie (Northeastern University) Sequential Importance Sampling for Hybrid Model Bayesian Inference to Support Bioprocess Mechanism Learning and Robust Control pdfDistributionally Robust Optimization for Input Model Uncertainty in Simulation-Based Decision Making pdfModel Uncertainty and Robust Simulations Uncertainty Quantification Chair: Zeyu Zheng (University of California, Berkeley) Distributional Discrimination Using Kolmogorov-Smirnov Statistics and Kullback-Leibler Divergence for gamma, log-normal, and Weibull distributions. pdfFinancial Engineering, Model Uncertainty and Robust Simulations, Simulation Optimization Sampling and Regression Techniques Chair: Jiaming Liang (Yale University; The Chinese University of Hong Kong, Shenzhen) Track Coordinator - Professional Development Track: Weiwei Chen (Rutgers University), Seong-Hee Kim (Georgia Institute of Technology) Professional Development Survive and Thrive in Different Academic Systems: A Simulation Perspective Chair: Weiwei Chen (Rutgers University); Seong-Hee Kim (Georgia Institute of Technology) Track Coordinator - Project Management and Construction: Jing Du (University of Florida), Joseph Louis (Oregon State University) Project Management and Construction Data-driven Simulation for Construction Chair: Changbum Ahn (Seoul National University) Constructing an Audio Dataset of Construction Equipment from Online Sources for Audio-based Recognition pdfProject Management and Construction Simulation for Construction and Infrastructure Management Chair: Jinwoo Kim (University of Michigan) Construction Image Synthetization to Overcome a Small, Biased Real Training Dataset for DNN-Powered Visual Scene Understanding pdfProject Management and Construction Computer Vision and Ranging for Simulation Chair: Jinwoo Kim (University of Michigan) Project Management and Construction Machine Learning for Simulation in Construction Chair: Yitong Li (George Mason University) Accelerating Training Of Reinforcement Learning-Based Construction Robots In Simulation Using Demonstrations Collected In Virtual Reality pdfField-Based Assessment of Joint Motions in Construction Tasks with and without Exoskeletons in Support of Worker-Exoskeleton Partnership Modeling and Simulation pdfTrack Coordinator - Reliability Modeling and Simulation: Sanja Lazarova-Molnar (University of Southern Denmark, Karlsruhe Institute of Technology), Xueping Li (University of Tennessee), Olufemi Omitaomu (Oak Ridge National Laboratory) Reliability Modeling and Simulation Reliability 
Reliability Modeling and Simulation Reliability Modeling and Simulation I Chair: Shima Mohebbi (George Mason University)
Spatial Agent-based Simulation of Connected and Autonomous Vehicles to Assess Impacts on Traffic Conditions
Reliability Modeling and Simulation Reliability Modeling and Simulation II Chair: Olufemi Omitaomu (Oak Ridge National Laboratory); Sanja Lazarova-Molnar (Karlsruhe Institute of Technology, University of Southern Denmark)
Quantifying Error Propagation In Multi-stage Perception System Of Autonomous Vehicles Via Physics-based Simulation
Reliability Modeling and Simulation Reliability Modeling and Simulation III Chair: Xueping Li (University of Tennessee); Parisa Niloofar (SDU)
Uncertainty and Sensitivity Analyses on Solar Heat Gain Coefficient of a Glazing System with External Venetian Blind
Track Coordinator - Scientific Applications: Rafael Mayo-García (CIEMAT), Esteban Mocskos (CSC-CONICET, University of Buenos Aires (AR))
Scientific Applications Scientific Applications I Chair: Rafael Mayo-García (CIEMAT)
Covid-19 Suppression Using a Testing/Quarantine Strategy: a Multi-paradigm Simulation Approach Based on a SEIRTQ Compartmental Model
Scientific Applications Scientific Applications II Chair: Rafael Mayo-García (CIEMAT)
Design and Deployment of a Simulation Platform: Case Study of an Agent-Based Model for Youth Suicide Prevention
Track Coordinator - Simulation and AI: Edward Y. Hua (MITRE Corporation), Yijie Peng (Peking University), Simon J. E. Taylor (Brunel University London)
Simulation and AI Simulation and AI Methodology I Chair: Yijie Peng (George Mason University)
Simulation and AI Simulation and AI Methodology II Chair: Claudia Szabo (The University of Adelaide)
Use of Reinforcement Learning for Prioritizing Communications in Contested and Dynamic Environments
Simulation and AI Simulation and Artificial Intelligence: A Foundation for a New, Reimagined Tomorrow? Chair: Simon J. E. Taylor (Brunel University London)
Simulation and AI Reinforcement Learning I Chair: Edward Y. Hua (MITRE Corporation)
Simulation of the Internal Electric Fleet Dispatching Problem at a Seaport: A Reinforcement Learning Approach
Simulation and AI Reinforcement Learning II Chair: Edward Y. Hua (MITRE Corporation)
Simulation and AI Simulation and AI Methodology III Chair: Kim van den Houten (Technische Universiteit Delft)
Tree-Structured Parzen Estimators With Uncertainty For Hyperparameter Optimization Of Machine Learning Algorithms
Simulation and AI Simulation and AI Methodology IV Chair: Ruijiu Mao (National University of Singapore)
Simulation and AI Applications I Chair: Dehghani Mohammad (Northeastern University)
A Simulation-aided Deep Reinforcement Learning Approach for Optimization of Automated Sorting Center Processes
Modeling Methodology, Simulation and AI Model Recognition and Identification Chair: Edward Y. Hua (MITRE Corporation)
Simulation and AI Applications II Chair: Simon J. E. Taylor (Brunel University London)
Transfer Learning For Prediction Of Supply Air Temperature From A Cooling System In An Existing Building
Track Coordinator - Simulation as Digital Twin: Na Geng (Shanghai Jiao Tong University), Andrea Matta (Politecnico di Milano), Yuan Wang (Singapore University of Social Sciences)
Simulation as Digital Twin Digital Twins Applications Chair: Yuan Wang (Singapore University of Social Sciences)
Simulation as Digital Twin Methodologies for Digital Twins Chair: Cathal Heavey (University of Limerick)
Online Validation of Simulation-based Digital Twins Exploiting Time Series Analysis (Best Contributed Applied Paper - Finalist)
Simulation as Digital Twin Digital Twins in Logistics and Transportation I Chair: Andrea Matta (Politecnico di Milano)
Simulation as Digital Twin Digital Twins in Logistics and Transportation II Chair: Jie Xu (George Mason University)
Predictive Traffic Blocking to Avoid Congestion in Large-scale Automated Material Handling Systems
Simulation as Digital Twin Digital Twins in Manufacturing Chair: Giovanni Lugaresi (Politecnico di Milano)
Data-Driven Simulation For Production Balancing And Optimization: A Case Study In The Fashion Luxury Industry
Track Coordinator - Simulation Down Under: John Fowler (Arizona State University), David Post (CSIRO)
Environmental and Sustainability Applications, Simulation Down Under Simulation Down Under Chair: David Post (CSIRO)
Track Coordinator - Simulation Education: Christopher Lynch (Old Dominion University), Krzysztof J. Rechowicz (Old Dominion University)
Simulation Education Simulation Education and Gaming Chair: Jayendran Venkateswaran (IIT Bombay); Leonardo Chwif (Escola de Engenharia Mauá, Mauá Institute of Technology)
Simulation Teaching during the Pandemic: Report of an Experience in a Higher Education Private Institution
Track Coordinator - Simulation Optimization: Siyang Gao (City University of Hong Kong), Guangxin Jiang (Harbin Institute of Technology, School of Management), Giulia Pedrielli (Arizona State University)
Simulation Optimization Estimation Techniques for Simulation Optimization Chair: Susan R. Hunter (Purdue University)
Central Limit Theorems for Constructing Confidence Regions in Strictly Convex Multi-Objective Simulation Optimization
Fixed Budget Ranking and Selection with Streaming Input Data (Best Contributed Theoretical Paper - Finalist)
Simulation Optimization Advances in Ranking and Selection Chair: Jeff Hong (Fudan University)
Simulation Optimization Applications of Simulation Optimization Chair: Michael Geurtsen (Eindhoven University of Technology, Nexperia)
A Logistic Regression and Linear Programming Approach for Multi-skill Staffing Optimization in Call Centers
An Inexact Variance-Reduced Method For Stochastic Quasi-Variational Inequality Problems With An Application In Healthcare
Simulation Optimization Multiobjective Simulation Optimization Chair: Matthew T. Ford (Cornell University)
Optimal Computing Budget Allocation for Multi-Objective Ranking and Selection under Bernoulli Distribution
Simulation Optimization Search-based Simulation Optimization Algorithms I Chair: Michael Choi (Yale-NUS College and Department of Statistics and Data Science, National University of Singapore)
Object-oriented Implementation and Parallelization of the Rapid Gaussian Markov Improvement Algorithm
Simulation Optimization Search-based Simulation Optimization Algorithms II Chair: Mark Semelhago (Amazon.com)
Landscape Modification Meets Surrogate Optimization: Towards Developing an Improved Stochastic Response Surface Method
Financial Engineering, Model Uncertainty and Robust Simulations, Simulation Optimization Sampling and Regression Techniques Chair: Jiaming Liang (Yale University; The Chinese University of Hong Kong, Shenzhen)
Track Coordinator - Vendor Tracks: Ernie Lee, Chao Meng (University of Southern Mississippi), David T. Sturrock (Simio LLC), Edward Williams (PMC)
Track Coordinator - Ph.D. Colloquium: Anatoli Djanatliev (University of Erlangen-Nuremberg), Siyang Gao (City University of Hong Kong), Chang-Han Rhee (Northwestern University), Cristina Ruiz-Martín (Carleton University)
PhD Colloquium PhD Colloquium
Emergency Vehicle Preemption Strategies: a Microsimulation Based Case Study on a Smart Signalized Corridor
Use of Social Determinants of Health in Agent-based Models for Early Detection of Cervical Cancer
Does Financial Quantitative Easing Help Alleviate the Economic Disparity?: Results of Agent-based Simulation Models
Simulations for Optimizing Dispatching Strategies in Semiconductor Fabs Using Machine Learning Techniques
Agent-based Modelling of Farmers’ Climate-resilient Crop Choice in the Upper Mekong Delta of Vietnam
Supervised Machine Learning for Effective Missile Launch Based on Beyond Visual Range Air Combat Simulations
Poster Madness Poster Madness Chair: Cristina Ruiz-Martín (Carleton University)
Dispatching in Real Frontend Fabs With Industrial Grade Discrete-Event Simulations by Use of Deep Reinforcement Learning
Charging Schedule Problem of Electric Buses Using Discrete Event Simulation with Different Charging Rules
Simulation-based Optimization For Operational Excellence Of A Digital Picking System In A Warehouse
Leveraging Causal Discovery Methods to Enhance Passenger Experience at Airport - An Analysis Method for Agent-based Simulators
Integration of Discrete-event Simulation in the Planning of a Hydrogen Electrolyzer Production Facility
Towards Performance-Aware Partitioning for Large-Scale Agent-Based Microscopic Distributed Traffic Simulation
Exploring the Complexity in Managing End-of-Life Lithium-Ion Battery: A System Dynamics Perspective
Analytical and Simulation-Driven Machine Learning Methods for Generating Real-Time Outpatient Length-of-Stay Predictions
Analysis of Covid-19 Using a Modified SEIR Model to Understand the Cases Registered in Singapore, Spain and Venezuela
Case Study Competition Finalists' Presentations I Chair: Haobin Li (National University of Singapore, Centre for Next Generation Logistics)