Plenary · Plenary Keynote: Beyond Data: Data-Driven Digital Twins for Sustainable Future Cities Chair: Margaret Loper (Georgia Tech Research Institute) Beyond Data: Data-Driven Digital Twins for Sustainable Future Cities Jane Macfarlane (University of California at Berkeley) Abstract Urbanization is straining our transportation systems and road networks. The resulting congestion, delays, and air pollution not only degrade quality of life but also contribute greatly to climate change. The confluence of dramatic changes in the availability of data that measures mobility with significant increases in computational capabilities will allow us to develop next-generation traffic management systems that make it easier for cities to create safe and fluid traffic networks, while balancing the needs of a large variety of travelers. Advanced traffic management systems, founded on data-driven digital twins, will predict traffic congestion patterns and find alternative routing and control mechanisms to distribute mobility. This talk will highlight our urban-scale, parallel discrete event simulation platform – Mobiliti. Just as important, it will highlight how we measure the improvements these technologies bring against the requirements of building livable, equitable cities. Vendor · Vendor [In person] ProModel Corporation Simulation Solutions – Better Than Ever ProModel Corporation Simulation Solutions – Better Than Ever Bruce Gladwin and Michael Rice (ProModel Corporation) Abstract ProModel Corporation has been delivering discrete-event simulation software solutions for over 30 years. Starting with our ProModel Optimization Suite, designed for the manufacturing industry, we now have five distinct Commercial-Off-The-Shelf (COTS) simulation technology products for virtually any industry, as well as three market-focused platforms.
This presentation will highlight the use case and value for each product and provide a high-level overview of each product’s capabilities. In addition, we are pleased to introduce the fourth edition of Simulation Using ProModel by Biman Ghosh, Royce Bowden, Bruce Gladwin, and Charles Harrell, published by Cognella Academic Publishing. Plenary · Plenary Titan Talk: Andreas Tolk: When Smart People share Smart Methods to create Smart Cities: How M&S enables Transdisciplinary Solutions Chair: Margaret Loper (Georgia Tech Research Institute) When Smart People share Smart Methods to create Smart Cities: How M&S enables Transdisciplinary Solutions Andreas Tolk (The MITRE Corporation) Abstract The pioneer in leadership studies, Warren Bennis, used the definition “Leadership is the capacity to translate vision into reality.” But how do we do this in a world that we increasingly recognize to be complex, agile, and full of non-linear relations? How do we bring the many experts needed to build smart cities together under a common vision? This talk discusses some challenges and offers ideas on how to bring experts from diverse disciplines together to work towards a common vision and translate it into reality. We see how ideas discussed within the Winter Simulation Conference can be applied. Using research examples about smart solutions from various levels, we will discuss the idea of conceptual alignment as the prerequisite for successful composition of smart methods into coherent solutions, to show how M&S can not only provide tools but also conceptually lead the way for many complex problem solutions.
Technical Session · Complex, Intelligent, Adaptive and Autonomous Systems [In person] M&S of Adaptive and Autonomous Systems Chair: Saurabh Mittal (MITRE Corporation) Development of a Reinforcement Learning-based Adaptive Scheduling Algorithm for Block Assembly Production Line Best Contributed Applied Paper - Finalist Jonghun Woo, Young In Cho, and So Hyun Nam (Seoul National University) and Jong-Ho Nam (Korea Maritime and Ocean University, Seoul National University) Abstract Rule-based heuristic algorithms and meta-heuristic algorithms have been studied to solve the scheduling problems of production systems. In recent research, reinforcement learning-based adaptive scheduling algorithms have been applied to solve complex problems with high-dimensional and vast state spaces. A production system in shipyards is a highly variable system in which various production factors such as space, workforce, and resources are related. Thus, adaptive scheduling according to changes in the production system and surrounding environment must be performed in shipyards. In this study, a basic reinforcement learning model for the scheduling problems of shipyards was developed. A simplified model of the panel block shop in shipyards was assumed, and the optimal policy for determining the input sequence of blocks was learned to reduce the flow time. The open-source discrete event simulation (DES) kernel SimPy was incorporated into the environment of the reinforcement learning model. A Traffic Avoidance Path Planning Method for Autonomous Vehicles in a Warehouse Environment Sriparvathi Bhattathiri, Maojia Li, and Michael Kuhl (Rochester Institute of Technology) Abstract Autonomous mobile robots are widely used today in supply chain, manufacturing, and service systems. Major challenges in these systems are dispatching and path planning. Centralized systems typically assume knowledge of the location, planned path, and status of all mobile robots in the system.
This paper presents a simulation-based decentralized path planning method that has a traffic prediction and avoidance component. The method is applied to autonomous mobile robots in a warehouse environment. Given only the starting, pick-up, and drop-off locations of the other robots in the system, we utilize a deep-learning algorithm to predict heavy traffic zones with the goal of minimizing travel time. We conduct a Monte Carlo simulation analysis to demonstrate the capabilities of the method. Panel · Plenary [In person] Panel: Women in Simulation Chair: Margaret Loper (Georgia Tech Research Institute) Women in Simulation Hamsa Bastani (University of Pennsylvania); Renee Thiesing (Simio LLC); and Jane Macfarlane (Institute of Transportation Studies, University of California at Berkeley) Abstract Join our distinguished panel of inspiring women from academia, industry, and research who will share their unique career journeys. The event will be insightful, educational, and inspirational, as the participants offer their perspectives on leading and pursuing innovation in simulation. This session is focused on building a community of women in WSC by making connections, sharing stories and sage advice, and supporting a new generation of women as they grow their careers in modeling and simulation. Vendor · Vendor [In person] Overview of Arena 16.1 and Application of New Features Overview of Arena 16.2 New Features Gail Kenny (Rockwell Automation Inc.) Abstract The release of Rockwell Automation’s Arena 16.2 software offers both new and experienced Arena users an updated and enhanced tool to simulate and make decisions for their systems. In this presentation, the updated appearance and new capabilities of Arena 16.2 will be highlighted. Improvements to the user interface will be discussed, including new animation tools. In addition, new and enhanced modules, multiple-instance support, new reports, and other new features will be demonstrated.
Emphasis will be on how these product updates enhance customer success with the application. Plenary · Plenary Titan Talk: David Nicol: Challenges and Approaches to the Modeling and Simulation of Gargantuan Discrete Systems Chair: Margaret Loper (Georgia Tech Research Institute) Challenges and Approaches to the Modeling and Simulation of Gargantuan Discrete Systems David Nicol (University of Illinois) Abstract One of the most impactful ways that simulation is used in the physical sciences is to apply relatively simple rules to really large data states. Vendor · Vendor [In person] Simio’s new Neural Networks Features: An Iterative Process of Inference and Training The Application of Simio Scheduling in Industry 4.0 Ryan Luttrell and Adam Sneath (Simio LLC) Abstract Simulation has traditionally been applied in system design projects where the basic objective is to evaluate alternatives and predict and improve long-term system performance. In this role, simulation has become a standard business tool with many documented success stories. Beyond these traditional system design applications, simulation can also play a powerful role in scheduling by predicting and improving the short-term performance of a system. Technical Session, Introductory Tutorial · Introductory Tutorials [In person] Work Smarter, Not Harder: A Tutorial on Designing and Conducting Simulation Experiments Chair: Daniel García de Vicuña (Public University of Navarre) Work Smarter, Not Harder: A Tutorial on Designing and Conducting Simulation Experiments Susan M. Sanchez (Naval Postgraduate School), Paul Sanchez (retired), and Hong Wan (North Carolina State University) Abstract Simulation models are integral to modern scientific research, national defense, industry and manufacturing, and public policy debates. These models tend to be extremely complex, often with thousands of factors and many sources of uncertainty.
Understanding the impact of these factors and their interactions on model outcomes requires efficient, high-dimensional design of experiments. Unfortunately, all too often, large-scale simulation models continue to be explored in ad hoc ways. This suggests that more simulation researchers and practitioners need to be aware of the power of designed experiments in order to get the most from their simulation studies. In this tutorial, we demonstrate the basic concepts important for designing and conducting simulation experiments, and provide references to other resources for those wishing to learn more. This tutorial (an update of previous WSC tutorials) will prepare you to make your next simulation study a simulation experiment. Technical Session · MASM: Semiconductor Manufacturing [In person] MASM panel Chair: John Fowler (Arizona State University) Production-level Artificial Intelligence Applications in Semiconductor Manufacturing Chen-Fu Chien (National Tsing Hua University), Hans Ehm (Infineon Technologies AG), John Fowler (Arizona State University), Lars Moench (University of Hagen), and Cheng-Hung Wu (National Taiwan University) Abstract This panel paper discusses the use of Artificial Intelligence (AI) techniques to address production-level problems in semiconductor manufacturing. We have gathered a group of expert semiconductor researchers and practitioners from around the world who have applied AI techniques to semiconductor problems, and the paper provides their answers to an initial set of questions. These answers describe the AI work that has already taken place and suggest future directions in this arena.
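The experiment-design tutorial above emphasizes efficient, high-dimensional designs over ad hoc exploration. As a minimal illustration of one common space-filling option (not a method prescribed by the tutorial itself; the function name and parameters are our own), the following sketch generates a random Latin hypercube in pure Python:

```python
import random

def latin_hypercube(n_points, n_factors, seed=0):
    """Generate a random Latin hypercube design on [0, 1)^n_factors.

    Each factor's range is split into n_points equal bins, and each
    bin is sampled exactly once per factor, so every factor is covered
    evenly even when the number of runs is small.
    """
    rng = random.Random(seed)
    # One independent permutation of the bins per factor.
    columns = []
    for _ in range(n_factors):
        bins = list(range(n_points))
        rng.shuffle(bins)
        columns.append(bins)
    design = []
    for i in range(n_points):
        # Jitter each coordinate uniformly within its assigned bin.
        point = [(columns[j][i] + rng.random()) / n_points
                 for j in range(n_factors)]
        design.append(point)
    return design

# A 10-run design over 4 factors: each factor's 10 bins each used once.
design = latin_hypercube(n_points=10, n_factors=4)
```

Each design point can then be mapped from [0, 1) to the actual factor ranges of the simulation model.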
Technical Session, Introductory Tutorial · Introductory Tutorials [In person] Multiple Streams with Recurrence-Based, Counter-Based, and Splittable Random Number Generators Chair: Giulia Pedrielli (Arizona State University) Multiple Streams with Recurrence-Based, Counter-Based, and Splittable Random Number Generators Pierre L'Ecuyer (University of Montreal, Google); Olivier Nadeau-Chamard (University of Montreal); Yi-Fan Chen (Google Research); and Justin Lebar (Waymo) Abstract We give an overview of the state of the art in the design and implementation of random number generators for simulation and general Monte Carlo sampling in parallel computing environments. We emphasize the need for multiple independent streams and substreams of random numbers, as well as the advantages (and potential pitfalls) of the increasingly popular counter-based and dynamically splittable generators. We look at recently proposed constructions and software. We also recall the basic quality criteria for good random number generators and their theoretical and empirical testing. The paper outlines solutions and also raises issues that would require further study. Technical Session, Introductory Tutorial · Commercial Case Studies [In person] Tested Success Tips For Simulation Project Excellence Chair: David T. Sturrock (Simio LLC) Tested Success Tips For Simulation Project Excellence Devdatta Deo and David T. Sturrock (Simio LLC) Abstract How can you make your projects successful? Modeling can certainly be fun, but it can also be quite challenging. With the new demands of Smart Factories, Digital Twins, and Digital Transformation, the challenges multiply. You want your first and every project to be successful, so you can justify continued work. Unfortunately, a simulation project is much more than simply building a model -- the skills required for success go well beyond knowing a particular simulation tool.
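The streams tutorial above discusses counter-based generators, whose outputs are a pure function of a key (stream identifier) and a counter, with no sequential state. The toy sketch below conveys only that idea; it uses SHA-256 as the mixing function for clarity, whereas production counter-based generators such as Philox or Threefry use much faster primitives, and this construction is not proposed in the tutorial itself:

```python
import hashlib
import struct

def counter_based_uniform(stream_id, counter):
    """Map a (stream, counter) pair to a float in [0, 1).

    Because the output depends only on the key and counter, any
    element of any stream can be computed independently and in
    parallel, and streams with different keys never overlap by
    construction.
    """
    digest = hashlib.sha256(struct.pack(">QQ", stream_id, counter)).digest()
    (word,) = struct.unpack(">Q", digest[:8])  # first 64 bits as an integer
    return word / 2**64

# Disjoint streams: same counters, different keys.
stream_a = [counter_based_uniform(1, c) for c in range(5)]
stream_b = [counter_based_uniform(2, c) for c in range(5)]
```

Random access is free: the fourth draw of stream 1 can be recomputed directly from `(1, 3)` without generating the first three draws.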
Technical Session, Introductory Tutorial · Introductory Tutorials [In person] Tutorial: Graphical Methods for the Design and Analysis of Experiments Chair: Kelsea B. Best (Vanderbilt University) Tutorial: Graphical Methods for the Design and Analysis of Experiments Russell Barton (The Pennsylvania State University) Abstract You have built a simulation model, but now must choose runs to i) validate it and ii) gain insight about the associated real system and make managerial recommendations. Do you need guidance? This introductory tutorial views the design of experiments as a five-step process, and presents graphical tools for each of the five steps. Further, with a graphical framework for the design, results can be presented graphically as well, helping you communicate the results visually to management in a way that builds trust. Technical Session · Environment and Sustainability Applications [In person] Environmental and Sustainability Applications 1 Chair: Suzanne DeLong (The MITRE Corporation) Sustainable Computing and Simulation: A Literature Survey Suzanne DeLong and Andreas Tolk (MITRE Corporation) Abstract Smart technologies are everywhere, and the creation of a smart world, from smart devices to smart cities, is rapidly growing, with the potential to improve quality of life. Businesses, governments, and individual users of smart technology expect a level of service and access to data that is achieved through data and supercomputing centers. These centers potentially consume vast amounts of power, and their continued growth may be unsustainable and contribute to greenhouse gases. As smart technologies rely heavily on such computational capabilities, their sustainability is pivotal for a smart future.
This paper explores the literature to identify the problems; categorize the challenges as well as possible solutions; explore how simulation and machine learning can improve computational sustainability; and consider the need for trade-off analysis to determine when applying simulation and machine learning is beneficial. A taxonomy for sustainable computing is presented to guide future research. Modeling Multi-Level Patterns of Environmental Migration in Bangladesh: An Agent-Based Approach Kelsea B. Best, Ao Qu, and Jonathan Gilligan (Vanderbilt University) Abstract Environmental change interacts with population migration in complex ways that depend on interactions between impacts on individual households and on communities. These coupled individual-collective dynamics make agent-based simulations useful for studying environmental migration. We present an original agent-based model that simulates environment-migration dynamics in terms of the impacts of natural hazards on labor markets in rural communities, with households deciding whether to migrate based on maximizing their expected income. We use a pattern-oriented approach that seeks to reproduce observed patterns of environmentally driven migration in Bangladesh. The model is parameterized with empirical data, and unknown parameters are calibrated to reproduce the observed patterns. The model can reproduce these patterns, but only for a narrow range of parameters. Future work will compare income-maximizing decisions to psychologically complex decision heuristics that include non-economic considerations.
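The migration model above has households migrate when expected income favors it, with hazards depressing local labor markets. The toy sketch below illustrates that style of decision rule only; all parameter values, the wage structure, and the heterogeneous cost draw are hypothetical placeholders, not the paper's calibrated model:

```python
import random

def run_migration_model(n_households=1000, n_steps=20,
                        hazard_prob=0.1, wage_loss=0.2,
                        migration_cost=0.3, seed=42):
    """Toy income-maximizing migration sketch (illustrative only).

    Each step, a natural hazard may depress the expected local wage.
    A household migrates when the destination wage, net of its own
    one-time migration cost, exceeds its expected local income.
    """
    rng = random.Random(seed)
    local_wage = 1.0
    dest_wage = 1.1
    migrated = 0
    at_home = n_households
    for _ in range(n_steps):
        hazard = rng.random() < hazard_prob
        expected_local = local_wage * ((1 - wage_loss) if hazard else 1.0)
        movers = 0
        for _ in range(at_home):
            # Heterogeneous costs: each household draws its own cost.
            cost = migration_cost * rng.uniform(0.5, 1.5)
            if dest_wage - cost > expected_local:
                movers += 1
        at_home -= movers
        migrated += movers
    return migrated, at_home

migrated, at_home = run_migration_model()
```

Even this toy version shows the coupling the abstract describes: migration occurs in bursts triggered by hazard steps rather than at a steady background rate.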
Technical Session · Modeling Methodology [In person] Modeling Methodologies 2 Chair: Gabriel Wainer (Carleton University) Model Transformation across DEVS and Event Graph Formalisms Neal DeBuhr and Hessam Sarjoughian (Arizona State University) Abstract This paper develops a model transformation mechanism across the Discrete Event System Specification (DEVS) and Event Graph (EG) modeling formalisms. We detail this cross-formalism model transformation from methodological and software implementation perspectives. By using simple, well-defined, and automated mechanisms of cross-formalism model transformation, modelers establish a plurality of vantage points from which to understand and communicate model behavior. Model characteristics may be clarified, emphasized, obfuscated, or hidden across these different vantage points. This paper therefore serves as a step toward research into better modeling that can improve soft factors such as model reasoning and collaborative model design for developing better simulations. Exploiting Provenance and Ontologies in Supporting Best Practices for Simulation Experiments: A Case Study on Sensitivity Analysis Pia Wilsdorf, Nadine Fischer, Fiete Haack, and Adelinde M. Uhrmacher (University of Rostock) Abstract Simulation studies are intricate processes, and user support for conducting more consistent, systematic, and efficient simulation studies is needed. Simulation experiments, as one crucial part of a simulation study, can benefit from semi-automatic method selection, parameterization, and execution. However, this largely depends on the context in which the experiment is conducted. Context information about a simulation study can be provided in the form of provenance, which documents the artifacts that contributed to developing a simulation model. We present an approach that exploits provenance to support best practices for simulation experiments.
The approach relies on 1) explicitly specified provenance information, 2) an ontology of methods, 3) best-practice rules, and 4) integration with a previously developed experiment generation pipeline. We demonstrate our approach by conducting a sensitivity analysis experiment within a cell biological simulation study. Data Generation with PROSPECT: a Probability Specification Tool Best Contributed Theoretical Paper - Finalist Alan Ismaiel, Ivan Ruchkin, Jason Shu, Oleg Sokolsky, and Insup Lee (University of Pennsylvania) Abstract Stochastic simulations of complex systems often rely on sampling dependent discrete random variables. Currently, their users are limited in expressing their intentions about how these variables are distributed and related to each other over time. This limitation leads users to program complex and error-prone sampling algorithms. This paper introduces a way to specify, declaratively and precisely, a temporal distribution over discrete variables. Our tool PROSPECT infers and samples this distribution by solving a system of polynomial equations. The evaluation on three simulation scenarios shows that the declarative specifications are easier to write, are 3x more succinct than imperative sampling programs, and are processed correctly by PROSPECT. Technical Session · Modeling Methodology [In person] Agent Based Modeling Chair: Pia Wilsdorf (University of Rostock) Inferring Dependency Graphs for Agent-based Models Using Aspect-oriented Programming Justin Kreikemeyer, Till Koester, Adelinde Uhrmacher, and Tom Warnke (University of Rostock) Abstract Population-based continuous-time Markov chain (CTMC) models can generally be executed efficiently with stochastic simulation algorithms (SSAs). However, the heterogeneity of agent-based models poses a challenge for SSAs. To allow for efficient simulation, we take SSAs that exploit dependency graphs for population-based models and adapt them to agent-based models.
We integrate our approach with object-oriented frameworks for agent-based simulation by detecting dependencies via aspect-oriented programming (AOP). This way, modelers can implement models without manually recording dependency information, while still executing the models with efficient, dependency-aware SSAs. We provide an open-source implementation of our approach for the framework MASON, showing significant speedups in model execution. One Step At A Time: Improving The Fidelity Of Geospatial Agent-Based Models Using Empirical Data Amy A. Marusak (The University of Texas at Arlington); Anuj Mittal (Dunwoody College of Technology, School of Engineering); and Caroline C. Krejci (The University of Texas at Arlington) Abstract Agent-based modeling is frequently used to produce geospatial models of transportation systems. However, reducing the computational requirements of these models can require a degree of abstraction that compromises the fidelity of the modeled environment. The purpose of the agent-based model presented in this paper is to explore the potential of a volunteer-based crowd-shipping system for rescuing surplus meals from restaurants and delivering them to homeless shelters in Arlington, Texas. Each iteration of the model’s development has sought to improve model realism by incorporating empirical data to strengthen underlying assumptions. This paper describes the most recent iteration, in which a method is presented for selecting eligible volunteer crowd-shippers based on total trip duration, derived from real-time traffic data. Preliminary experimental results illustrate the impact of adding trip duration constraints and increasing the size of the modeled region on model behavior, and they illuminate the need for further analysis.
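The dependency-graph idea that Kreikemeyer et al. adapt to agent-based models is easiest to see in its population-based form: after a reaction fires, only the propensities listed in that reaction's dependency-graph entry are recomputed, instead of all of them. The sketch below is our own illustrative pure-Python version of that classical scheme, not the paper's MASON/AOP implementation:

```python
import random

def ssa_with_dependency_graph(x0, reactions, depends_on, t_end, seed=1):
    """Gillespie-style SSA that, after a reaction fires, refreshes only
    the propensities listed in its dependency-graph entry.

    reactions:  list of (propensity_fn, state_change_dict)
    depends_on: depends_on[j] = indices of reactions whose propensity
                may change after reaction j fires.
    """
    rng = random.Random(seed)
    state = dict(x0)
    props = [fn(state) for fn, _ in reactions]
    t = 0.0
    while True:
        total = sum(props)
        if total <= 0:
            break
        t += rng.expovariate(total)
        if t > t_end:
            break
        # Select a reaction with probability proportional to its propensity.
        r = rng.random() * total
        for j, p in enumerate(props):
            r -= p
            if r <= 0:
                break
        for species, delta in reactions[j][1].items():
            state[species] += delta
        # Dependency graph: recompute only the affected propensities.
        for k in depends_on[j]:
            props[k] = reactions[k][0](state)
    return state

# Simple birth-death example: A -> A+1 at constant rate, A -> A-1 linearly.
reactions = [
    (lambda s: 2.0, {"A": +1}),           # birth: constant propensity
    (lambda s: 0.1 * s["A"], {"A": -1}),  # death: depends on count of A
]
depends_on = [[1], [1]]  # only the death propensity ever needs refreshing
final = ssa_with_dependency_graph({"A": 10}, reactions, depends_on, t_end=50.0)
```

With many reactions, the savings come from each `depends_on[j]` being short; here the birth propensity is never touched after initialization.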
Technical Session, Introductory Tutorial · Introductory Tutorials [In person] A Gentle Introduction To Bayesian Optimization Chair: Daniel Otero-Leon (University of Michigan) A Gentle Introduction To Bayesian Optimization Antonio Candelieri (University of Milano-Bicocca) Abstract Bayesian optimization is a sample-efficient sequential global optimization method for black-box, expensive, and multi-extremal functions. It generates, and keeps updated, a probabilistic surrogate model of the objective function, depending on the performed evaluations, and optimizes an acquisition function to choose a new point to evaluate. The acquisition function deals with the exploration-exploitation dilemma depending on the surrogate’s predictive mean and uncertainty. Many alternatives are available, offering different trade-off mechanisms; different options are also possible for the probabilistic surrogate model: Gaussian Process regression is best suited for optimization over continuous search spaces, while other approaches, such as Random Forests or Gaussian Processes with ad-hoc kernels, deal with complex search spaces spanned by nominal, numeric, and conditional variables. This tutorial offers an introduction to these topics and a discussion of available tools, real-life applications, and recent advances, such as unknown constraints, multiple information sources, cost-awareness, and multi-objective optimization. Technical Session · Manufacturing Applications [In person] Warehousing Building a Digital Twin for Robot Workcell Prognostics and Health Management Deogratias Kibira (National Institute of Standards and Technology, University of Maryland) and Guodong Shao and Brian A. Weiss (National Institute of Standards and Technology) Abstract The application of robot workcells increases the efficiency and cost effectiveness of manufacturing systems. However, during operation, robots naturally degrade, leading to performance deterioration.
Monitoring, diagnostics, and prognostics (collectively known as prognostics and health management (PHM)) capabilities enable required maintenance actions to be performed in a timely manner. Given the importance of data-based decisions in many systems, effective PHM should be based on the analysis of data. The main challenges with robot PHM are the difficulty of relating data to healthy and unhealthy states and the lack of models to fuse and analyze up-to-date data to predict the future state of the robot. This paper describes concepts of digital twin development to overcome these challenges. A use case of a digital twin modeling robot tool center point accuracy is provided. The proposed procedure will be applicable to other use cases, such as modeling reduced robot repeatability or increased power consumption. Vendor [In person] Rockwell Automation Vendor Workshop Technical Session · Logistics, Supply Chains, and Transportation [In person] Support Development of Control Systems Chair: Leon McGinnis (Georgia Institute of Technology) Designing and Implementing Operational Controllers for A Robotic Tote Consolidation Cell Simulation Leon McGinnis, Shannon Buckley, and Ali V. Barenji (Georgia Institute of Technology) Abstract Operational control is a key driver of production system performance, yet the design of operational controllers is not well covered in the production systems simulation literature. With a robotic cell consolidating totes for delivery in a logistics hub as the use case, we describe the design of the cell’s operational controller and an implementation approach suitable for use in an AnyLogic™ hybrid agent-discrete event simulation. Research motivation is discussed. Design principles are clearly explained, and key aspects of implementation for AnyLogic™ are presented. Implications for the engineering of operational controllers in digital twins are addressed.
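The controller paper above treats operational control logic as a design artifact separate from the simulated plant. The sketch below illustrates that separation with a hypothetical event-driven dispatcher: the class names, event handlers, and FIFO rule are our own illustration, not the paper's AnyLogic™ design, which would wire such handlers to agent events:

```python
from collections import deque

class ConsolidationController:
    """Minimal operational-controller sketch: the control logic holds
    no plant state beyond what it needs, and reacts only to events
    reported by the (simulated or physical) cell.
    """
    def __init__(self, n_robots):
        self.idle_robots = deque(range(n_robots))
        self.pending_totes = deque()
        self.assignments = []  # log of (robot_id, tote_id) dispatches

    def on_tote_arrival(self, tote_id):
        self.pending_totes.append(tote_id)
        self._dispatch()

    def on_robot_done(self, robot_id):
        self.idle_robots.append(robot_id)
        self._dispatch()

    def _dispatch(self):
        # Pair waiting work with idle resources, FIFO on both sides.
        while self.idle_robots and self.pending_totes:
            robot = self.idle_robots.popleft()
            tote = self.pending_totes.popleft()
            self.assignments.append((robot, tote))

ctrl = ConsolidationController(n_robots=2)
for t in ["T1", "T2", "T3"]:
    ctrl.on_tote_arrival(t)   # T3 must wait: both robots are busy
ctrl.on_robot_done(0)         # robot 0 frees up and takes T3
```

Because the controller only sees events, the same object can drive a simulation model or, in principle, a digital twin of the physical cell.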
Scheduling and Controlling Multiple Vehicles on a Common Track in High-Powered Automated Vehicle Storage and Retrieval Systems Andreas Habl, Andrei Evseev, and Johannes Fottner (Technische Universität München) Abstract The ongoing trend towards increased throughput capacity and scalability of automated warehouses is pushing the application of rail-guided automated vehicle storage and retrieval systems (AVSRSs). By deploying more vehicles on each tier and lifting system, the performance and adaptability of these systems can be further increased. However, the transformation into high-powered AVSRSs requires advanced algorithms to run the system in a robust and efficient manner. In this work, an approach to scheduling and controlling multiple vehicles on a common track is presented, using a tier of a high-powered AVSRS. Computational experiments show that the developed algorithm enables collision-free and fast execution of transport tasks. By deploying several vehicles on a tier, the completion time of the transport tasks can be decreased and, hence, the throughput capacity of the tier can be increased. A Simulation-based Participatory Modelling Framework For Stakeholder Involvement In Urban Logistics Amita Singh, Magnus Wiktorsson, Jannicke Baalsrud Hauge, and Seyoum Eshetu Birkie (KTH Royal Institute of Technology) Abstract Computer-based simulation and participatory modelling have each, individually, supported the design and research of many case studies. However, little work has combined the two decision-making tools for problem solving in the domain of urban logistics, and the peer-reviewed literature on this remains sparse. This paper suggests combining the two fields to advance research on sustainable urban logistics.
In response to the requirements of simulation-based participatory modelling, we present a generic framework for developing these models. The framework facilitates dialogue among stakeholders with the help of a participation scheme that defines the level of participation of each stakeholder. Though the framework is presented in the context of simulation-based participatory modelling, it can be easily extended to other modelling techniques. Technical Session · Analysis Methodology [In person] Estimation Methodology 1 Chair: Shengyi He (Columbia University) Higher-Order Coverage Error Analysis for Batching and Sectioning Shengyi He and Henry Lam (Columbia University) Abstract While batching and sectioning have been widely used in simulation, their higher-order coverage behavior remains an open question, as does whether one is better than the other in this regard. We develop techniques to obtain higher-order coverage errors for sectioning and batching. We argue theoretically that neither batching nor sectioning is uniformly better than the other in terms of coverage, but sectioning usually has a smaller coverage error when the number of batches is large. We also support our theoretical findings via numerical experiments. Sufficient Conditions for a Central Limit Theorem to Assess the Error of Randomized Quasi-Monte Carlo Methods Marvin K. Nakayama (New Jersey Institute of Technology) and Bruno Tuffin (Inria Rennes, University of Rennes) Abstract Randomized quasi-Monte Carlo (RQMC) can produce an estimator of a mean (i.e., an integral) with root-mean-square error that shrinks at a faster rate than Monte Carlo's. While RQMC is commonly employed to provide a confidence interval (CI) for the mean, this approach implicitly assumes that the RQMC estimator obeys a central limit theorem (CLT), which has not been established for most RQMC settings.
To address this, we provide various conditions that ensure an RQMC CLT, as well as an asymptotically valid CI, and examine the tradeoffs in our restrictions. Our sufficient conditions, which depend on the regularity of the integrand, often require that the number of randomizations grow sufficiently fast relative to the number of points used from the low-discrepancy sequence. Advanced Tutorial · Advanced Tutorials [In person] Reflections on Simulation Optimization Chair: Zeyu Zheng (Stanford University) Reflections on Simulation Optimization Shane G. Henderson (Cornell University) Abstract I provide some perspectives on simulation optimization. First, more attention should be devoted to the finite-time performance of solvers than to ensuring convergence properties that may only arise on asymptotic time scales never reached in practice. Both analytical results and computational experiments can further this goal. Second, so-called sample-path functions can exhibit extremely complex behavior that is well worth understanding when selecting a solver and its parameters. Third, I advocate a layered approach to formulating and solving optimization problems, whereby a sequence of models is built and optimized, rather than first building a simulation model and only later "bolting on" optimization. Technical Session · Simulation Optimization [In Person] Applications & Related Methods Chair: Kimberly Holmgren (Georgia Institute of Technology) Customer Path Generation Simulation for Selection From Proposed Grocery Store Layouts Kimberly Holmgren (Georgia Institute of Technology) Abstract Before a grocery store opens, key operational decisions must be made with no historical data. One important decision is how to optimally lay out the store to maximize consumer spending.
This work reviews existing literature on simulation to optimize grocery store layout, uses computer vision techniques to transform a store diagram into a digital representation, and applies simulation methods to approximate which of several layouts proposed by a store designer would result in the highest amount of impulse purchasing. Output analysis methods are used to compare these results to determine whether one design outperforms the others. SPSC: an efficient, general-purpose execution policy for stochastic simulations Yu-Lin Huang, Gildas Morvan, Frédéric Pichon, and David Mercier (Université d'Artois) Abstract In this paper, we present a stochastic simulation execution policy named SPSC after its steps: Simulation, Partitioning, Selection, Cloning. It does not require prior knowledge of the model and allows one to efficiently estimate the probabilities of possible simulation outcomes, so-called solutions. It is evaluated on three different multi-agent-based simulations. Results show that SPSC could be a relevant alternative to the Monte Carlo method. Technical Session · Simulation Optimization [In person] Global Search Methods Chair: Giulia Pedrielli (Arizona State University) Getting to "rate-optimal" in Ranking & Selection Harun Avci, Barry L. Nelson, and Andreas Waechter (Northwestern University) Abstract In their seminal 2004 paper, Glynn and Juneja formally and precisely established the rate-optimal, probability-of-incorrect-selection, replication allocation scheme for selecting the best of k simulated systems. In the case of independent, normally distributed outputs, this allocation has a simple form that depends in an intuitively appealing way on the true means and variances. Of course, the means and (typically) variances are unknown, but the rate-optimal allocation provides a target for implementable, dynamic, data-driven policies to achieve.
In this paper, we compare the empirical behavior of four related replication-allocation policies: mCEI from Chen and Ryzhov and our new gCEI policy, both of which converge to the Glynn and Juneja allocation; AOMAP from Peng and Fu, which converges to the OCBA optimal allocation; and TTTS from Russo, which targets the rate of convergence of the posterior probability of incorrect selection. We find that these policies have distinctly different behavior in some settings. Partitioning and Gaussian Processes for Accelerating Sampling in Monte Carlo Tree Search for Continuous Decisions Menghan Liu, Yumeng Cao, and Giulia Pedrielli (Arizona State University) Abstract Abstract We propose Part-MCTS for sampling continuous decisions at each stage of a Monte Carlo Tree Search algorithm. At each MCTS stage, Part-MCTS sequentially partitions the decision space and keeps a collection of Gaussian processes to describe the landscape of the objective function. A classification criterion based on the estimation of the minimum allows us to focus attention on regions with better predicted behavior, reducing the evaluation effort elsewhere. Within each subregion, we can use any sampling distribution, and we propose to sample using Bayesian optimization. We compare our approach to KR-UCT (Yee et al. 2016) as a state-of-the-art competitor. Part-MCTS achieves better accuracy over a set of nonlinear test functions, and it has the ability to identify multiple promising solutions in a single run. This can be important when multiple solutions from a stage can be preserved and expanded at subsequent stages. Treed-Gaussian processes with Support Vector Machines as nodes for nonstationary Bayesian optimization Antonio Candelieri (Universita Milano Bicocca) and Giulia Pedrielli (Arizona State University) Abstract Abstract A large family of black-box methods rely on surrogates of the unknown, possibly nonlinear, nonconvex reward function.
While it is common to assume stationarity of the reward, many real-world problems satisfy this assumption only locally, hindering the widespread application of such methods. This paper proposes a novel nonstationary regression model combining Decision Trees and Support Vector Machine (SVM) classification for a hierarchical, non-axis-aligned partition of the input space. Gaussian Process (GP) regression is performed within each identified subregion. The resulting nonstationary regression model is the Treed Gaussian process with Support Vector Machine (SVMTGP), and we investigate the sampling efficiency of using our model within a Bayesian optimization (BO) context. Empirically, we show that the resulting algorithm, SVMTGP-BO, never underperforms BO applied with a homogeneous Gaussian process, while it consistently outperforms the homogeneous model on nonlinear functions with complex landscapes. Advanced Tutorial · Advanced Tutorials [In person] Thinking Inside the Box: A Tutorial on Grey-Box Bayesian Optimization Chair: Russell R. Barton (Pennsylvania State University) Thinking Inside the Box: A Tutorial on Grey-Box Bayesian Optimization Raul Astudillo and Peter Frazier (Cornell University) Abstract Abstract Bayesian optimization (BO) is a framework for global optimization of expensive-to-evaluate objective functions. Classical BO methods assume that the objective function is a black box. However, internal information about objective function computation is often available. For example, when optimizing a manufacturing line's throughput with simulation, we observe the number of parts waiting at each workstation, in addition to the overall throughput. Recent BO methods leverage such internal information to dramatically improve performance.
We call these "grey-box" BO methods because they treat objective computation as partially observable and even modifiable, blending the black-box approach with so-called "white-box" first-principles knowledge of objective function computation. This tutorial describes these methods, focusing on BO of composite objective functions, where one can observe and selectively evaluate individual constituents that feed into the overall objective; and multi-fidelity BO, where one can observe cheaper approximations of the objective function by varying parameters of the evaluation oracle. Advanced Tutorial · Advanced Tutorials [In person] A Tutorial on How to Connect Python with Different Simulation Software to Develop Rich Simheuristics Chair: Andrea D'Ambrogio (University of Roma Tor Vergata) A Tutorial on How to Connect Python with Different Simulation Software to Develop Rich Simheuristics Mohammad Peyman (Universitat Oberta de Catalunya); Mohammad Dehghanimohammadabadi (Northeastern University); and Pedro Copado-Mendez, Javier Panadero, and Angel A. Juan (Universitat Oberta de Catalunya) Abstract Abstract Simulation is an excellent tool to study real-life systems with uncertainty. Discrete-event simulation (DES) is a common simulation approach to model time-dependent and complex systems. Therefore, there are a variety of commercial (Simio, AnyLogic, Simul8, Arena, etc.) and non-commercial software packages that enable users to take advantage of DES modeling. While these tools are capable of modeling real-life systems with high accuracy, they generally fail to conduct advanced analytical analysis or complicated optimization (i.e., simheuristics). Hence, coupling these DES tools with external programming languages like Python offers additional mathematical operations and algorithmic flexibility. This integration makes the simulation modeling more intelligent and extends its applicability to a broader range of problems.
This study aims to provide a step-by-step tutorial to help simulation users create intelligent DES models by integrating them with Python. Multiple demo examples are discussed to provide insights and to demonstrate this connection with both commercial and non-commercial DES packages. Technical Session · Simulation Optimization [In person] Local & Gradient-Based Search Methods Chair: David Eckman (University of Pittsburgh) Improved Complexity of Trust-region Optimization for Zeroth-order Stochastic Oracles with Adaptive Sampling Yunsoo Ha and Sara Shashaani (North Carolina State University) and Quoc Tran-Dinh (University of North Carolina at Chapel Hill) Abstract Abstract We present an enhanced stochastic trust-region optimization with adaptive sampling (ASTRO-DF), in which the search is guided by optimizing an iteratively constructed local model built on objective-value estimates with stochastic sample sizes. The noticeable feature is that the underdetermined quadratic model with a diagonal Hessian requires fewer function evaluations, which is particularly useful in high dimensions. This paper describes the enhanced algorithm in detail, gives several theoretical results, including iteration complexity, and provides almost-sure convergence guarantees. Our numerical experience shows the finite-time superiority of the enhanced ASTRO-DF over the state of the art using the SimOpt library. Inexact-Proximal Accelerated Gradient Method for Stochastic Nonconvex Constrained Optimization Problems Morteza Boroun and Afrooz Jalilzadeh (The University of Arizona) Abstract Abstract Stochastic nonconvex optimization problems with nonlinear constraints have a broad range of applications in intelligent transportation, cyber-security, and smart grids.
In this paper, we first propose an inexact-proximal accelerated gradient method to solve a nonconvex stochastic composite optimization problem where the objective is the sum of smooth and nonsmooth functions, the constraint functions are assumed to be deterministic, and the solution to the proximal map of the nonsmooth part is calculated inexactly at each iteration. We demonstrate an asymptotic sublinear rate of convergence in stochastic settings using an increasing sample size, provided that the error in the proximal operator diminishes at an appropriate rate. We then customize the proposed method for solving stochastic nonconvex optimization problems with nonlinear constraints and demonstrate a convergence rate guarantee. Numerical results show the effectiveness of the proposed algorithm. Flat Chance! Using Stochastic Gradient Estimators to Assess Plausible Optimality for Convex Functions David J. Eckman (Texas A&M University) and Matthew Plumlee and Barry L. Nelson (Northwestern University) Abstract Abstract This paper studies methods that identify plausibly near-optimal solutions based on simulation results obtained from only a small subset of feasible solutions. We do so by making use of both noisy estimates of performance and their gradients. Under a convexity assumption on the performance function, these inference methods involve checking only a system of inequalities. We find that these methods can yield more powerful inference at less computational expense compared to methodological predecessors that do not leverage stochastic gradient estimators.
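The inequality check that the Eckman, Plumlee, and Nelson abstract alludes to can be illustrated with a minimal sketch. Under convexity, each sampled solution x_i contributes a first-order cut f(x_i) + g_i·(y − x_i) that lower-bounds f(y); a candidate y is plausibly near-optimal if the tightest cut does not push its value above the best observed estimate. The function name, test points, and tolerance below are hypothetical illustrations, not the authors' actual procedure:

```python
import numpy as np

def plausibly_optimal(y, x, f_hat, g_hat, tol):
    """Check whether candidate y is plausibly near-optimal.

    Convexity implies f(y) >= f(x_i) + g_i . (y - x_i) for every
    sampled point x_i, so the maximum of these first-order cuts is a
    lower bound on f(y).  If that bound, minus a noise allowance tol,
    still undercuts the best observed estimate, y cannot be ruled out.
    """
    cuts = f_hat + np.einsum("ij,ij->i", g_hat, y - x)  # one cut per sample
    return cuts.max() - tol <= f_hat.min()

# toy example: f(x) = ||x||^2, exact values/gradients at a few points
x = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0], [2.0, 2.0]])
f_hat = (x ** 2).sum(axis=1)
g_hat = 2 * x

print(plausibly_optimal(np.zeros(2), x, f_hat, g_hat, tol=0.05))        # near the true optimum
print(plausibly_optimal(np.array([3.0, 3.0]), x, f_hat, g_hat, tol=0.05))  # clearly suboptimal
```

In practice f_hat and g_hat would be noisy simulation estimates, and tol would be chosen from their standard errors; the point is that the screening reduces to evaluating a system of linear inequalities.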
Technical Session · Covid-19 and Epidemiological Simulations [In person] Case studies of COVID-19 impacts and interventions Chair: Miguel Mujica Mota (Amsterdam University of Applied Sciences) DeepABM: Scalable and Efficient Agent-Based Simulations via Geometric Learning Frameworks - A Case Study for COVID-19 Spread and Interventions Ayush Chopra (MIT); Esma Gel (Arizona State University); Jayakumar Subramanian and Balaji Krishnamurthy (Adobe India); Santiago Romero-Brufau, Kalyan Pasupathy, and Thomas Kingsley (Mayo Clinic); and Ramesh Raskar (MIT) Abstract Abstract We introduce DeepABM, a computational framework for agent-based modeling that leverages a graph convolutional framework for simulating actions and interactions over large agent populations. DeepABM allows simulating large populations in real time and running them efficiently on GPU architectures. DeepABM-COVID has been developed to provide support for various non-pharmaceutical interventions (quarantine, exposure notification, vaccination, testing) for the COVID-19 pandemic, and can scale to populations of representative size in real time on a GPU. DeepABM-COVID can model 200 million interactions (over 100,000 agents across 180 time-steps) in 90 seconds, and is made available online to help researchers with modeling and analysis of various interventions. We explain various components of the framework and discuss results from one research study, conducted in collaboration with clinical and public health experts, evaluating the impact of delaying the second dose of the COVID-19 vaccine.
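The graph-based idea behind frameworks like DeepABM can be sketched in a few lines: when contacts are stored as an adjacency matrix, one step of epidemic spread becomes a single matrix-vector product (a "message pass") that vectorizes naturally onto GPU hardware. The snippet below is a generic toy SIR-style step in numpy, not DeepABM's actual implementation, and all parameters are placeholders:

```python
import numpy as np

def infection_step(adj, infected, p_transmit, rng):
    """One synchronous step of infection spread on a contact graph.

    adj        : (n, n) 0/1 contact matrix
    infected   : (n,) boolean state vector
    p_transmit : per-contact transmission probability

    The matrix-vector product counts infected neighbors per agent,
    which is the 'message passing' step a GPU framework would batch.
    """
    exposures = adj @ infected                       # infected contacts per agent
    p_infection = 1.0 - (1.0 - p_transmit) ** exposures
    new_cases = (~infected) & (rng.random(infected.size) < p_infection)
    return infected | new_cases

rng = np.random.default_rng(42)
n = 1000
adj = (rng.random((n, n)) < 0.01).astype(float)      # random contact graph
adj = np.maximum(adj, adj.T)                         # contacts are symmetric
state = np.zeros(n, dtype=bool)
state[:5] = True                                     # seed infections
for _ in range(10):
    state = infection_step(adj, state, p_transmit=0.05, rng=rng)
print(int(state.sum()), "agents infected after 10 steps")
```

A framework such as DeepABM would replace the dense matrix with sparse, learned graph-convolution operators and run the same pattern for millions of edges per step.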
BPMN-Based Simulation Analysis of the COVID-19 Impact on Emergency Departments: a Case Study in Italy Carole Neuner (University of Rome Tor Vergata) and Paolo Bocciarelli and Andrea D'Ambrogio (University of Roma Tor Vergata) Abstract Abstract The COVID-19 outbreak, recognized as a pandemic in March 2020, has brought the need to promptly meet an extraordinary demand for health-related resources and medical assistance. The objective of this work is to analyze the structural and procedural changes that have been enacted in an emergency department (ED), according to guidelines provided by national authorities. Specifically, the guidelines deal with how to manage the access of COVID-19 patients, ensure the isolation of suspected cases, perform proper triage, and identify the appropriate treatment path for all patients. The paper describes a process modeling and simulation-based approach to analyze the treatment of patients accessing the ED of an Italian hospital. The approach makes use of the Business Process Model and Notation standard to specify ED treatment processes before and during the pandemic, so as to evaluate different scenarios and effectively support process improvement activities by means of simulation-based what-if analysis. Covid-19-related Challenges for New Normality in Airport Terminal Operations Michael Schultz (Technische Universität Dresden), Miguel Mota (Amsterdam University of Applied Sciences), Mingchuan Luo (Technische Universität Dresden), Paolo Scala (Amsterdam University of Applied Sciences), and Daniel Lubig (Technische Universität Dresden) Abstract Abstract Airport operations are undergoing significant change, having to meet pandemic requirements in addition to intrinsic security requirements. Although air traffic has declined massively, airports are still the critical hubs of the air transport network.
The new restrictions due to the COVID-19 pandemic pose new challenges for airport operators in redesigning airport terminals and managing passenger flows. To evaluate the impact of COVID-19 restrictions, we implement a reference airport environment. In this "Airport in the Lab" environment, we demonstrate the operational consequences derived from the new operational requirements. In addition, countermeasures to mitigate any negative impacts of these changes are tested. The results highlight emerging issues that the airport will most likely face and possible solutions. Finally, we could apply the findings and lessons learned from our testing at our reference airport to a real airport. Technical Session · MASM: Semiconductor Manufacturing [In person] Production Planning for Wafer Fabs Chair: Lars Moench (University of Hagen) Predicting Cycle Time Distributions With Aggregate Modelling Of Work Areas In A Real-World Wafer Fab Patrick Deenen (Eindhoven University of Technology, Nexperia); Jelle Adan (Nexperia); and John Fowler (Arizona State University) Abstract Abstract In a semiconductor wafer fabrication facility (wafer fab), it is important to accurately predict the remaining cycle time of the wafers in process, the so-called wafer outs. A wafer fab consists of multiple work areas, which are its main building blocks. Therefore, to accurately predict the wafer outs, an accurate prediction of the cycle time distribution at each work area is essential. This paper proposes an aggregate model to simulate each of these work areas. The aggregate model is a single server with an aggregate process time distribution and an overtaking distribution. Both distributions are WIP-dependent, but an additional layer-type dependency is introduced for the overtaking distribution. An application to a real-world wafer fabrication facility of a semiconductor manufacturer is presented for the work areas of photolithography, oxidation, and dry etch.
These experiments show that the aggregate model can accurately predict the cycle time distributions in work areas by layer-type. An Optimization Framework for Managing Product Transitions in Semiconductor Manufacturing Carlos Leca (North Carolina State University), Karl Kempf (Intel Corporation), and Reha Uzsoy (North Carolina State University) Abstract Abstract The highly competitive nature of high-technology industries such as semiconductor manufacturing requires firms to constantly refresh their product portfolios with new products. Product divisions, which are responsible for product specification and demand forecasting, must collaborate with manufacturing and product engineering groups to develop new products and bring them into high-volume production for sale. We present a centralized optimization model for resource allocation in this complex multi-agent environment that will serve as a basis for the development of decentralized solution approaches. Computational experiments indicate that the proposed model captures the interactions between agents in a logically consistent manner, providing a basis for decentralized approaches and stochastic formulations. Data-driven Production Planning Formulations for Wafer Fabs: A Computational Study Tobias Voelker and Lars Moench (University of Hagen) Abstract Abstract Cycle times are on the order of ten weeks in most semiconductor wafer fabrication facilities (wafer fabs). They have to be explicitly considered in production planning. A nonlinear relation between resource workload and cycle time can be observed. In this paper, we study data-driven (DD) production planning formulations. These formulations are based on a set of system states representing the congestion behavior of the wafer fab with work in process (WIP) and resulting output levels. The effects of different WIP-output relations and capacity constraints in the DD models are investigated.
Moreover, several methods are proposed to obtain representative sets of system states. The performance of the DD variants is compared with the performance of the allocated clearing function (ACF) model using a scaled-down simulation model of a wafer fab. Simulation results demonstrate that under certain experimental conditions, the DD models lead to similar profit and cost values as the ACF model. Technical Session · Healthcare Applications [In person] Long term planning for chronic diseases Chair: Priscille Koutouan (North Carolina State University) Using Longitudinal Health Records to Simulate the Impact of National Treatment Guidelines for Cardiovascular Disease Daniel Felipe Otero-Leon, Weiyu Li, Mariel S. Lavieri, Brian T. Denton, Jeremy B. Sussman, and Rodney Hayward (University of Michigan) Abstract Abstract Continuous tracking of patients' health data through electronic health records (EHRs) has created an opportunity to predict healthcare policies' long-term impacts. Despite the advances in EHRs, data may be missing or sparsely collected. In this article, we use EHR data to develop a simulation model to test multiple treatment guidelines for cardiovascular disease (CVD) prevention. We use our model to estimate treatment benefits in terms of CVD risk reduction and treatment harms due to side effects, based on when and how much medication the patients are exposed to over time. Our methodology consists of using the EM algorithm to fit sparse health data and a discrete-time Monte Carlo simulation model to test guidelines for different patient demographics. Our results suggest that, among published guidelines, those that focus on reducing CVD risk are able to reduce treatment without increasing the risk of severe health outcomes.
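The discrete-time Monte Carlo approach the CVD abstract mentions can be illustrated with a toy cohort model: each simulated year, patients whose risk exceeds a guideline threshold receive treatment that scales down their event probability, and events are sampled as Bernoulli draws. All rates, the risk-drift rule, and the function name below are hypothetical placeholders, not the authors' EHR-calibrated model:

```python
import numpy as np

def simulate_guideline(base_risk, threshold, risk_reduction, years, n, rng):
    """Discrete-time Monte Carlo of a 'treat when annual risk exceeds
    threshold' guideline over a simulated cohort of n patients.

    Returns the fraction of the cohort experiencing an event.
    """
    risk = np.full(n, base_risk)       # annual event risk per patient
    event = np.zeros(n, dtype=bool)
    for _ in range(years):
        treated = risk > threshold
        annual = np.where(treated, risk * (1 - risk_reduction), risk)
        event |= (~event) & (rng.random(n) < annual)  # sample this year's events
        risk *= 1.05                   # placeholder: risk drifts up with age
    return event.mean()

rng = np.random.default_rng(1)
for thr in (0.02, 0.05, 0.10):
    frac = simulate_guideline(0.03, thr, 0.3, years=10, n=50_000, rng=rng)
    print(f"treat above {thr:.0%} annual risk: {frac:.1%} of cohort with events")
```

A study like the one described would replace the placeholder risk dynamics with trajectories fitted to sparse EHR data (e.g., via the EM algorithm) and add treatment side effects as a competing harm.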
Creating Simulated Equivalents to Project Long-term Population Health Outcomes of Underserved Patients: an Application to Colorectal Cancer Screening Priscille Koutouan and Maria Mayorga (North Carolina State University) and Meghan O'Leary and Kristen Hassmiller Lich (University of North Carolina) Abstract Abstract Simulation models can be used to project the impact of interventions on long-term population health outcomes. To project the value of an intervention in a specific population, the model must be able to simulate individuals with similar characteristics and pathways as the population receiving the intervention. We aimed to estimate the long-term colorectal cancer (CRC) outcomes (cancers and deaths averted, life-years gained) associated with receipt of a first CRC screening through the Colorectal Cancer Control Program (CRCCP) among low-income and underserved patients in the U.S. We recalibrate a simulation model previously calibrated to a real-world mix of insurance and demographic factors for a particular state. We describe our strategy for developing simulated equivalents in terms of demographics, natural history, and CRC screening results for the CRCCP patients and for matching these patients to their simulated equivalents. We then project lifetime CRC incidence and mortality with and without the intervention. Exploring Market Segment Assignment Strategies to Monopsonistic Entities in a Hypothetically Coordinated Vaccine Market Bruno Alves Maciel and Ruben A. Proano (Rochester Institute of Technology) Abstract Abstract This study presents a simulation-optimization approach for implementing the first stage of a four-stage optimization framework used to simulate a hypothetically coordinated vaccine market. The study's overall goal is to optimally make procurement decisions that result in vaccines that are more affordable for the buyers and profitable for their producers.
In the initial stage, groups of market segments are assigned to coordinating entities that will make optimal procurement decisions under tiered pricing and pooled procurement mechanisms. We explore nine different market-to-entity assignment policies through variants of a min-max optimization problem that mimics assignment policies with varying levels of cooperation among market segments. Our results show that market segments with low purchasing power can maximize their savings if they procure together with market segments with higher purchasing power. Additionally, market assignments that result in coordinating entities serving similarly sized populations mitigate the profit reduction of transferring savings to low-income market segments. Technical Session · Manufacturing Applications [In person] MA2 Chair: Ana Moreira (Auburn University) Applying Discrete-Event Simulation and Value Stream Mapping to Reduce Waste in an Automotive Engine Manufacturing Plant Ana Carolina M. Moreira and Daniel F. Silva (Auburn University) Abstract Abstract This paper applies a combination of Value Stream Mapping (VSM) and Discrete-Event Simulation (DES) in an automotive engine manufacturing plant. First, a current-state VSM was created and the sources of waste were identified. The leak-test area and engine impregnation process were identified as major sources of waste. Based on that, two potential improvement scenarios were developed and analyzed using DES. The simulation was used to compare key measures of performance in the current state and the proposed scenarios, using different settings for adjustable system parameters. Results showed improvements of up to 29% in annual engine impregnation cost for one scenario, without detriment to other measures. The study’s major takeaway is demonstrating that VSM in conjunction with DES is a powerful approach for studying changes in production processes, one that leverages the advantages of both methodologies.
Initial Assessment of the Influence of Robustness on the Weighted Tardiness for a Scheduling Problem with High Demand Volatility Based on a Simulation Model Ziga Letonja, Nikolaus Furian, Johannes Pan, and Vössner Siegfried (Graz University of Technology) and Melanie Reuter-Oppermann (Technical University of Darmstadt) Abstract Abstract Nowadays, customer behavior changes rapidly, resulting in high demand volatility throughout immensely globalized supply chains, especially from key-account customers. Customers thus require not only excellent products produced in the shortest time but also adherence to defined due dates. In this paper, we propose a multi-objective large neighborhood search (MO-LNS) algorithm with a frozen period, focusing on minimizing the total weighted tardiness of a two-stage production with random rush orders. The total weighted tardiness is set as the primary objective function and the robustness as the secondary one. The algorithm is implemented in a discrete event simulation (DES) model to observe the effects of varying objective function weights at different levels of order dynamism. The results suggest that there is not a single dominant weight pair, but rather a range that dominates others. Thus, using a pair from the dominant range increases the likelihood of reducing the tardiness over a period. Plenary · Plenary [In person] Military Keynote Chair: Nathaniel D. Bastian (United States Military Academy) Digital Discovery within the Air Force Enterprise Modeling, Simulation and Analysis Ecosystem David Panson (Air Force Office of Strategic Development Planning and Experimentation (SDPE)) Abstract Abstract The Air Force has put significant effort into defining, creating, and building a Modeling, Simulation and Analysis (MS&A) ecosystem to support acquisition, training, and operations related to current and future warfighting capabilities.
The MS&A ecosystem includes the foundational data architecture required to support advanced MS&A capabilities and to ensure that acquisition decisions are aligned to the Air Force Digital Campaign, and it leverages model-based systems engineering (MBSE) and digital engineering activities to enable an efficient and effective environment for greatly improving the overall acquisition lifecycle. The MBSE and MS&A tools are aligned and integrated into a collaborative environment, quickly becoming the way forward for Air Force capability in a digitally connected domain. The presentation will cover the overall ecosystem and the digital connectivity within and across the environment. Additionally, specific examples will be included to illustrate the application and implications of using the MS&A ecosystem to make better decisions. Technical Session · Military and National Security Applications [In person] Rockets, Active Shooter Defeat System, and Violence Modeling Chair: Andrew Hall (Marymount University, Institute for Defense Analysis) Development of a Neural Network-Based Controller for Short-Range Rockets Raul de Celis and Luis Cadarso (Universidad Rey Juan Carlos) Abstract Abstract Improving accuracy is a critical component of rocket-based defense systems. Accuracy may become independent of range when using inertial navigation systems. This is especially true for short-range man-portable air-defense systems, which are usually composed of portable missiles whose movement is governed by non-linear and rapidly changing forces and moments. Effective guidance strategies for these systems could improve the weapon's precision. This research introduces a new non-linear neural network-based controller that improves navigation and control systems by lowering the circular error probable, a measure of accuracy. Nonlinear simulations based on actual flight dynamics are used to train the neural networks.
The simulation results show that the presented approach performs well in a 6-DOF simulation environment, featuring high accuracy and robustness against parameter uncertainty. Investigating an Active Shooter Defeat System with Simulation and Data Farming Charles Lovejoy (Recruiting Command, United States Army) and Mary McDonald, Thomas Lucas, and Susan Sanchez (Naval Postgraduate School) Abstract Abstract This research uses simulation and data farming to explore and quantify the effectiveness of an active shooter defeat system at reducing fatalities over a breadth of conditions. An agent-based simulation is created to model a hypothetical active shooting event at a school building in West Point, New York. The simulation is data farmed to explore factors that influence the number of fatalities with and without the employment of a prototype active shooter defeat system known as the “Joint Active Shooter Protection and Response” (JASPR) system. Factors explored include the shooter’s entry point, whether the shooter commits suicide, the shooter’s rate and accuracy of fire, the number of bystanders in each section of the building, post-dispatch response time, and whether JASPR is present. Based on 45,000 simulated active shooting events, our results suggest that a well-designed system can significantly reduce fatalities. We present the conditions under which JASPR may be most effective. Technical Session · Simulation and Philosophy [In person] Real World Ethical Implications for Analysis Chair: Andreas Tolk (The MITRE Corporation) These two invited papers focus on epistemological and ethical challenges of using simulation for decision making. While the application domain is defense, the implications are generalizable. The presentations will be followed by a 30-minute discussion of the papers and their implications with the audience.
Sic Semper Simulation -- Balancing Simplicity and Complexity in Modeling and Analysis Ernie Page (MITRE Corporation) and James Thompson and Matthew Koehler (The MITRE Corporation) Abstract Abstract Determining the level of detail necessary to a modeling effort is fundamental to the discipline. Insufficient detail can limit a model’s utility. Likewise, extraneous detail may impact the runtime performance of the model, increase its maintenance burden, impede the model validation process by making the model harder to understand than necessary, or overfit the model to a specific scenario. Intuition suggests that resolving this tension is an intractable challenge that reflects the art of modeling and is without promise for general solution. Most analytic communities accept that a truly rigorous, repeatable, engineering solution to the construction of an arbitrary model is unattainable. But the long history of research in modeling methodology suggests there are useful steps communities can make in that direction. Through the lens of current modeling challenges, practices and methods in several domains, we hope to add to this important discussion at the intersection of philosophy and engineering. A New Ethical Principle for Analysts Who Use Models Paul Davis (RAND Corporation, Pardee RAND Graduate School) Abstract Abstract This paper reviews some existing ethical principles applying to modelers and analysts. It then proposes a new principle motivated by modern advances that allow modeling and analysis to confront uncertainty—even deep uncertainty—and to do so effectively. Given these advances and the high stakes that are often involved, analysts have an obligation to convey more information than has been expected in the past—information to help decisionmakers choose strategies that will hedge as well as feasible against uncertainties. 
Using dilemmas familiar to analysts, including some that draw on topical events, the paper then discusses challenges involved in following the principle and suggests tactics that can help in doing so. Panel · Using Simulation to Innovate [In person] Panel Discussion on Simulation and AI Chair: Susan M. Sanchez (Naval Postgraduate School) Using Simulation and Artificial Intelligence to Innovate - Are We Getting Even Smarter? Simon J. E. Taylor (Brunel University London), Juergen Branke (Warwick Business School), Oliver Rose (Institute of Applied Computer Science), Young-Jun Son (University of Arizona), and Susan Sanchez (Naval Postgraduate School) Abstract Abstract Artificial Intelligence (AI) is spreading into all walks of life and across disciplines. It is being used to explain, predict and optimize, often by creating and experimenting with models derived from data. Arguably, Modeling & Simulation (M&S) has been doing this since the late 1950s. Our approaches are different from those used in AI but have some overlap. Both AI and simulation bring significant, different and potentially complementary benefits to end users. However, the majority of work is separate. Is there potential for innovation by bringing together these fields and their associated techniques? This panel explores the potential synergies of these relationships and considers major opportunities and the barriers to realization. Technical Session · Military and National Security Applications [In person] Multi-Agent Reinforcement Learning, Generative Methods, and Bayesian Neural Networks Chair: Andrew Hall (Marymount University, Institute for Defense Analysis) Challenges and Opportunities for Generative Methods in the Cyber Domain Marc Chale (Air Force Institute of Technology) and Nathaniel Bastian (United States Military Academy) Abstract Abstract Large, high-quality data sets are essential for training machine learning models to perform their tasks accurately.
The lack of such training data has constrained machine learning research in the cyber domain. This work explores how Markov Chain Monte Carlo (MCMC) methods can be used for realistic synthetic data generation and compares them to several existing generative machine learning techniques. The performance of MCMC is compared to that of generative adversarial network (GAN) and variational autoencoder (VAE) methods in estimating the joint probability distribution of network intrusion detection system data. A statistical analysis of the synthetically generated cyber data determines the goodness of fit, aiming to improve cyber threat detection. The experimental results suggest that the data generated from MCMC fits the true distribution approximately as well as the data generated from GAN and VAE; however, MCMC requires a significantly longer training period and is unproven for higher-dimensional cyber data. Robust Decision-Making in the Internet of Battlefield Things Using Bayesian Neural Networks Adam Cobb and Brian Jalaian (U.S. Army Research Laboratory), Nathaniel Bastian (United States Military Academy), and Stephen Russell (U.S. Army Research Laboratory) Abstract Abstract The Internet of Battlefield Things (IoBT) is a dynamically composed network of intelligent sensors and actuators that operate as a command and control, communications, computers, and intelligence complex system with the aim of enabling multi-domain operations. The use of artificial intelligence can help transform the IoBT data into actionable insight to create information and decision advantage on the battlefield. In this work, we focus on how accounting for uncertainty in IoBT systems can result in more robust and safer systems. Human trust in these systems requires the ability to understand and interpret how machines make decisions. Most real-world applications currently use deterministic machine learning techniques that cannot incorporate uncertainty.
In this work, we focus on the machine learning task of classifying vehicles from their audio recordings, comparing deterministic convolutional neural networks (CNNs) with Bayesian CNNs to show that correctly estimating the uncertainty can help lead to robust decision-making in IoBT. Graph Neural Network Based Behavior Prediction to Support Multi-Agent Reinforcement Learning in Military Training Simulations Lixing Liu, Nikolos Gurney, Kyle McCullough, and Volkan Ustun (Institute for Creative Technologies, University of Southern California) Abstract Abstract We introduce a computational behavioral model for non-player characters (NPCs) that endows them with the ability to adapt to their experiences --- including interactions with human trainees. Most existing NPC behavioral models for military training simulations are either rule-based or reactive with minimal built-in intelligence. Such models are unable to adapt to the characters' experiences, be they with other NPCs, the environment, or human trainees. Multi-agent Reinforcement Learning (MARL) presents opportunities to train adaptive models for both friendly and opposing forces to improve the quality of NPCs. Still, military environments present significant challenges since they can be stochastic, partially observable, and non-stationary. We discuss our MARL framework to devise NPCs exhibiting dynamic, authentic behavior and introduce a novel Graph Neural Network based behavior prediction model to strengthen their cooperation. We demonstrate the efficacy of our behavior prediction model in a proof-of-concept multi-agent military scenario. 
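The robust decision-making idea in the Bayesian CNN abstract above hinges on turning several stochastic predictions into an uncertainty score. A minimal stdlib-only sketch of that idea, in the spirit of Monte Carlo dropout: average the class probabilities over several stochastic forward passes and use the entropy of the mean as the uncertainty measure. All numbers are hypothetical stand-ins for network outputs, and `predictive_entropy` is an invented name for illustration:

```python
import math
import random

def predictive_entropy(prob_samples):
    """Uncertainty score: entropy of the mean class distribution
    taken over T stochastic forward passes of a network."""
    T = len(prob_samples)
    k = len(prob_samples[0])
    mean = [sum(s[c] for s in prob_samples) / T for c in range(k)]
    return -sum(p * math.log(p) for p in mean if p > 0.0)

random.seed(0)
# Hypothetical outputs of 20 stochastic passes for two inputs:
# the passes agree on the first input and disagree on the second.
confident = [[0.97, 0.02, 0.01] for _ in range(20)]
conflicted = [random.choice([[0.90, 0.05, 0.05], [0.05, 0.90, 0.05]])
              for _ in range(20)]

# The disagreeing passes yield a higher-entropy (more uncertain) prediction,
# which a robust decision rule could flag rather than act on blindly.
assert predictive_entropy(confident) < predictive_entropy(conflicted)
```

A deterministic classifier would return only a class label for both inputs; the entropy gap is what lets an IoBT-style system defer or escalate the uncertain case.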
Advanced Tutorial · Advanced Tutorials [In person] Business Process Modeling and Simulation with DPMN: Processing Activities Chair: Dehghani Mohammad (Northeastern University) Business Process Modeling and Simulation with DPMN: Processing Activities Gerd Wagner (Brandenburg University of Technology) Abstract Abstract The Business Process Modeling Notation (BPMN) has been established as a modeling standard in Business Process (BP) Management. However, BPMN lacks several important elements needed for BP simulation and is not well-aligned with the Queueing Network paradigm of Operations Research and the related BP simulation paradigm pioneered by the Discrete Event Simulation (DES) languages/tools GPSS and SIMAN/Arena. The Discrete Event Process Modeling Notation (DPMN) proposed by Wagner (2018) is based on Event Graphs (Schruben 1983), which capture the DES paradigm of Event-Based Simulation. By making it possible to create flowchart models of "processing processes" performed in processing networks, DPMN reconciles the BPMN approach with DES. DPMN is the first visual modeling language that supports all important DES approaches: event-based simulation, activity-based DES and Processing Network models, providing a foundation for harmonizing and unifying the many different terminologies/concepts and diagram languages of established DES tools. Technical Session · Simulation Education [In person] Simulation Education Chair: Andrew J. Collins (Old Dominion University); James F. Leathrum (Old Dominion University) A New M&S Engineering Program with a Base in Computer Engineering James Leathrum, Yuzhong Shen, and Oscar Gonzalez (Old Dominion University) Abstract Abstract The reality of the current academic climate, in particular the drop in enrollment expected over the next decade as a result of declining birth rates, is forcing hard choices for small programs such as the Modeling & Simulation Engineering (M&SE) program at Old Dominion University (ODU).
The quality of the program and its benefit to its constituents do not offset the impracticality of continuing such programs. Two primary options for such programs are closure or consolidation. ODU decided on the latter course of action for M&SE. Due to the computational nature of the existing program, a decision was made to place M&SE as a major under the Computer Engineering degree. This paper presents the justification for this decision, the resulting curriculum and the hard decisions made for it to fit under computer engineering. A discussion of feedback from the industrial advisory board for the existing M&SE program is included. Increased Need for Data Analytics Education in Support of Verification and Validation Christopher Lynch, Ross Gore, Andrew Collins, Steven Cotter, Gayane Grigoryan, and James Leathrum (Old Dominion University) Abstract Abstract Computational simulation studies utilize data to assist in developing models, including conducting verification and validation (V&V). Input modeling and V&V are, historically, difficult topics to teach, and they are often only offered as a cursory introduction, leaving the practitioner to pick up the skills on the job. This teaching problem is often a result of the inability to introduce realistic datasets into class examples, because “real world” data tends to come in poorly formed datasets. In this paper, a case is made that teaching data analytics can help ease this problem. Data analytics is an approach that includes data wrangling, data mining, and exploratory analyses through visualization and machine learning. We provide a brief discussion on how data analytics has been applied to computational modeling and simulation in the context of verification and validation, but also for input modeling, as that gives a basis for validation.
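The data wrangling skills the abstract above argues for (cleaning poorly formed "real world" data before input modeling and validation) can be illustrated in a few lines. A hedged sketch, with a hypothetical 'service time' column exhibiting typical export defects (blanks, text flags, decimal commas, impossible values):

```python
from statistics import mean

def wrangle_durations(raw):
    """Coerce a messy 'service time' column to floats, dropping
    non-numeric entries and impossible (negative) values."""
    clean = []
    for value in raw:
        try:
            x = float(str(value).strip().replace(",", "."))
        except ValueError:
            continue          # e.g. 'N/A', '', 'error'
        if x >= 0:
            clean.append(x)
    return clean

# Hypothetical raw export with the usual defects.
raw = ["4.2", " 3.8", "N/A", "5,1", "-1.0", "", "4.9", "error", "3.5"]
sample = wrangle_durations(raw)

# Summary statistics that could feed input-distribution fitting and,
# later, be compared against simulation output for validation.
print(len(sample), round(mean(sample), 2))  # → 5 4.3
```

The point for the classroom is that the five usable observations, not the nine raw rows, are what input modeling and V&V should be built on.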
Commercial Case Study · Commercial Case Studies [In person] Factory Environment Chair: Yusuke Legard (MOSIMTEC) Model Predictive Control of Cooling System for an Existing Building Young-Sub Kim and Cheol-Soo Park (Seoul National University, Department of Architecture and Architectural Engineering) Abstract Abstract This commercial case study presents the development of a data-driven simulation model and its use for control optimization of the HVAC cooling system of a building. Time series data is collected from the building's energy monitoring and automatic control systems. Inputs and outputs are selected considering causality, to predict indoor air temperatures for the next 20 minutes and to find optimal control variables of the HVAC system. Over a three-day validation period, electric energy use was reduced by 31% compared to the existing rule-based control. Development of a Weather Normalization Process for Commercial Building Energy Benchmarking Young-Seo Yoo, Dong-Hyuk Yi, and Cheol-Soo Park (Seoul National University) Abstract Abstract Recently, IT companies have begun providing building energy benchmarking services based on the presupposition that Energy Use Intensity (EUI, kWh/m²·yr) is a reliable indicator of building energy performance. However, the EUIs of two buildings built in different climates can differ even though both buildings’ thermal characteristics are identical. In this regard, the authors propose a weather normalization process for objective building energy benchmarking. For this purpose, energy data was collected from EnergyPlus simulations for 76 locations, and a ‘building energy signature’ denoted by EUI* was extracted based on a non-linear regression between outdoor air temperature and EUI. It is found that, under various climate conditions, EUI* reflects building energy characteristics better than the EUI.
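The 'building energy signature' idea above (regress EUI on outdoor temperature, then compare buildings on a common scale) can be sketched compactly. The paper uses non-linear regression over EnergyPlus data; this illustration substitutes ordinary least squares on a single predictor, and the monthly values, reference temperature, and function names are invented:

```python
def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x (one predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def normalized_eui(temps, euis, reference_temp=15.0):
    """Evaluate the fitted temperature-EUI curve at a common reference
    temperature, so buildings in different climates can be compared
    on one weather-normalized scale."""
    a, b = ols_fit(temps, euis)
    return a + b * reference_temp

# Hypothetical monthly data for one building: EUI rises with
# cooling demand in warmer months.
temps = [5, 10, 15, 20, 25, 30]
euis = [12, 13, 15, 17, 20, 22]
print(round(normalized_eui(temps, euis), 2))  # → 15.46
```

Two buildings with identical thermal characteristics but different climates would get different raw EUIs yet similar values of this weather-normalized signature, which is the benchmarking property the abstract targets.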
Real-time Model Predictive Heating Control for a Factory Building Using Lumped Approach Seon-Jung Ra, Han-Sol Shin, and Cheol-Soo Park (Seoul National University) Abstract Abstract This case study focuses on a Model-Predictive Control (MPC) result using predicted indoor air temperatures at multiple points in a factory building. Rather than resorting to a full-blown dynamic simulation model, the authors developed a lumped simulation model using only temperature sensor readings and HVAC system operation status, because detailed information about the building was lacking. It is found that the model can accurately predict the temperatures and can then be used for optimal on/off control of the 61 unit heaters installed in the factory building. Owing to the simulation model’s predictions, energy savings of 56.3% were realized, while the indoor air temperatures were maintained within comfortable ranges close to the heating setpoint temperature. Commercial Case Study · Commercial Case Studies [In person] Data Science Chair: David T. Sturrock (Simio LLC) Generating Business Value through Simulation and Data Science Bipin Chadha (CSAA Insurance Group) Abstract Abstract One of the key challenges that businesses are facing is how to leverage the emerging tools and methods of data science to improve their business performance. Although there are many examples of the use of data science/machine learning techniques in specific sectors and in gaming, it is often not easy to translate that success to other sectors or use cases. We have had great success in achieving business value for many use cases by using data science techniques in combination with operations research techniques such as simulation and optimization. In this paper we will go over several case studies that show how simulation and optimization help in overcoming many of the challenges associated with data science techniques.
The methodology we use is easily extended to a wide range of industries and use cases and enables an organization to improve its decision making and generate business value. An Integrated Solution for Data Farming and Knowledge Discovery in Simulation Data: A Case Study of the Battery Supply of a Vehicle Manufacturer Jonas Genath, Soeren Bergmann, and Steffen Strassburger (TU Ilmenau) and Sven Spieckermann and Stephan Stauber (SimPlan AG) Abstract Abstract The development of logistics concepts, here for supplying automobile production with batteries, is a major challenge, especially under uncertainty. To mitigate this, the method of Knowledge Discovery in Simulation Data is applied here. To enable planners to use the method easily, a tool (SimAssist-4farm) that integrates readily into practical use was developed. Technical Session · MASM: Semiconductor Manufacturing [In person] Scheduling Applications in Semiconductor Manufacturing Chair: Semya Elaoud (Flexciton Ltd) Multi-Objective Parallel Batch Scheduling In Wafer Fabs With Job Timelink Constraints Semya Elaoud, Ruaridh Williamson, Begun Efeoglu Sanli, and Dennis Xenos (Flexciton Limited) Abstract Abstract In this paper we consider multi-objective batch scheduling in the complex flexible job shop problem applied to semiconductor wafer fabs. Batches have different operating costs and consecutive steps of a job are constrained with time links. We also consider several other process aspects that arise in semiconductor wafer fabrication facilities such as flexible machine downtime, incompatible job families, different job sizes and parallel machines. The aim is to minimize the total weighted batching cost, queuing time, and the number of violations of timelink constraints. We present a hybrid two-stage solution strategy, combining Mixed Integer Linear Programming (MILP) models and heuristics.
At a high level, the proposed approach can be broken down into “constructive” and “improvement” steps. The comparison of Flexciton schedules evaluated under uncertainty against the actual factory schedules when solving large industrial instances shows the significant improvements that our solution can bring. An Evaluation of Strategies for Job Mix Selection in Job Shop Production Environments - Case: A Photolithography Workstation Amir Ghasemi and Cathal Heavey (University of Limerick) Abstract Abstract In this research, the impact of job mix selection in each production shift in a job shop production environment is examined. This is a critical question within photolithography workstations in semiconductor manufacturing systems. For this purpose, a recently developed Simulation Optimization (SO) method named Evolutionary Learning Based Simulation Optimization (ELBSO) is implemented to solve a set of designed Stochastic Job Shop Scheduling problems captured from a real semiconductor manufacturing data set. Experimental results indicate that the best performance in each shift occurs when machines are flexible in terms of processing different job operations, and the jobs selected for a given shift have due dates that are as close to equal as possible. Technical Session · MASM: Semiconductor Manufacturing [In person] MASM 1 Chair: Raphael Herding (FTK – Forschungsinstitut für Telekommunikation und Kooperation e. V., Westfälische Hochschule) Simulation-based Performance Assessment of Sustainable Manufacturing Decisions Jens Rocholl and Lars Moench (University of Hagen) Abstract Abstract In this paper, we consider sustainable manufacturing decisions in semiconductor supply chains. A simulation-based framework is designed to assess such decisions in a dynamic and stochastic environment. Requirements for performance assessment of sustainable manufacturing decisions are derived in a first step. The architecture of the framework is then designed.
We specify components that model the supplied energy and the energy consumption of the manufacturing processes. Moreover, a component for demand generation is described. A component that deals with modeling user preferences with respect to conventional and to new sustainability performance measures is also sketched. The framework is illustrated by assessing the performance of an energy-aware scheduling algorithm for batch processing machines in a rolling horizon setting. Design and Application of an Ontology for Demand Fulfillment in Semiconductor Supply Chains Raphael Herding (FTK – Forschungsinstitut für Telekommunikation und Kooperation e. V., Westfälische Hochschule); Lars Moench (FTK – Forschungsinstitut für Telekommunikation und Kooperation e. V., University of Hagen); and Hans Ehm (Infineon Technologies AG) Abstract Abstract Ensuring interoperability of different information systems for planning and control is a challenging task in semiconductor supply chains. This is partially caused by the sheer size of the involved production facilities and the supply chains in the semiconductor domain, the permanent appearance of uncertainty, and the rapid technological changes which lead to sophisticated planning and control systems in this domain. Ontologies are a promising approach to support interoperability among such systems. Demand fulfillment is an important function in semiconductor supply chains. However, at the same time, it is a planning function that is not very well understood. In the present paper, a domain- and task ontology for demand fulfillment is designed based on a domain analysis. 
The usage of the proposed ontology is illustrated by means of an example. A Scalable Deep Learning-based Approach for Anomaly Detection in Semiconductor Manufacturing Simone Tedesco and Gian Antonio Susto (Università degli Studi di Padova) and Natalie Gentner, Andreas Kyek, and Yao Yang (Infineon Technologies AG) Abstract Abstract The diffusion of the Industry 4.0 paradigm has led to the creation and collection of huge manufacturing datasets; such datasets contain, for example, measurements coming from physical sensors located in different equipment or even in different manufacturing organizations. Such large and heterogeneous datasets represent a challenge when developing data-driven approaches like Anomaly Detection or Predictive Maintenance. In this work we present a new approach for performing Anomaly Detection that is able to handle heterogeneous data coming from different equipment, work centers or production sites. Technical Session · Military and National Security Applications [In person] Civil Infrastructure, Hostile Crowds and Military Modernization Chair: Nathaniel D. Bastian (United States Military Academy) Leveraging Network Interdependencies to Overcome Inaccessible Civil Infrastructure Data Brigham Moore, David Jacques, and Steven Schuldt (Air Force Institute of Technology) Abstract Abstract Data-driven decision making and expansion of smart city infrastructure require massive amounts of data that might not be available. The lack of infrastructure data can make it challenging to recover interdependent infrastructure systems following a disruption. Interdependent infrastructure systems are often modeled as networks with an interdependency parameter. Researchers can partially overcome gaps in data associated with the individual networks by modifying interdependency parameters to include interdependency type and coupling strategy information.
Overcoming missing telecommunications data is illustrated using a combined network design and scheduling problem with a modified interdependency parameter. The modified parameter allows analysis without a full dataset and removes the necessity of adding constraints and variables to handle complex infrastructure relationships. The difference in system operability results between partial and full datasets is at most 2.6%. This modeling method provides an interim solution to full data acquisition and may be suitable in other applications. Toward Better Management of Potentially Hostile Crowds Susan K. Aros, Anne Marie Baylouny, Deborah E. Gibbons, and Mary McDonald (Naval Postgraduate School) Abstract Abstract The U.S. Capitol protest and siege in January 2021 provides a vivid demonstration of the challenges posed by managing potentially hostile crowds. Individuals in these crowds are organized into identity groups. Crowd participants’ emotions, beliefs, objectives and group affiliations are dynamic. Security forces managing such crowds are tasked with the weighty decisions of tactical rules-of-engagement and choice of weapons. We have developed an agent-based simulation modeling the detailed psychological and behavioral dynamics of individuals and groups in a potentially hostile crowd. This crowd is modeled as actively engaging with security forces that protect a compound. The user can specify crowd attributes, choose diverse non-lethal weapons and rules-of-engagement, watch the event play out, and see the impacts on key outcomes of crowd attitudes and actions. We present our prototype simulation and initial experimental results. We then discuss our future plans for this research. A Cross-Discipline Framework to Enable Military Modernization Research Nathan Colvin (Old Dominion University) Abstract Abstract The field of military development touches upon disciplines from basic science to international policy.
The complexity of modern military development creates additional layers of categorization in an environment of increasingly multidisciplinary research. Hierarchy in complex systems can improve communication and outcomes. In addition to hierarchy, coordination amongst diverse research disciplines requires frameworks of standardized communication. By expanding existing military hierarchies and combining them with existing frameworks of cross-discipline coordination, practitioners of military modernization can improve knowledge management, sense-making, experiment design, and build improved campaigns of learning. To create methods of improved communication which accelerates military development, this paper proposes a fifteen-level hierarchy of military modernization with levels from basic science to global international policy, which are described in terms of research, theory, method, tools, applications, datasets, and administrative information. Technical Session · Hybrid Simulation [In person] Hybrid Simulation Modeling and Methods Chair: Caroline C. Krejci (The University of Texas at Arlington) A RESTful Persistent DEVS-based Interaction Model for the Componentized WEAP and LEAP RESTful Frameworks Mostafa D. Fard and Hessam S. Sarjoughian (Arizona State University) Abstract Abstract Modeling the interactions between separate models contributes to building flexible simulation frameworks. The relationship between the energy and water models developed in WEAP and LEAP tools is specified using an Algorithmic Interaction Model. This Interaction Model was proposed and developed to integrate the componentized WEAP and LEAP RESTful frameworks. However, this interaction model does not separate domain-specific model specification from its execution protocol, a fundamental principle of the Knowledge Interchange Broker modeling approach for model composability and simulation interoperability. 
To overcome this limitation, the parallel DEVS formalism is used to develop an Interaction Model for use with the DEVS-Suite simulator. The resulting DEVS Interaction Model (DEVS-IM) is supported with a RESTful framework and MongoDB for storing the interaction models for the water and energy models. The DEVS-IM can offer better support for model reusability, flexibility, and maintainability compared to the Algorithmic Interaction Model. Data Farming Output Analysis Using Explainable AI Niclas Feldkamp (Ilmenau University of Technology) Abstract Abstract Data Farming combines large-scale simulation experiments with high performance computing and sophisticated big data analysis methods. The portfolio of analysis methods for those large amounts of simulation data still holds potential for further development, and new methods emerge frequently. The application of machine learning and artificial intelligence is especially difficult, since many of those methods are very good at approximating data for prediction, but less good at actually revealing their underlying rules. To overcome the lack of comprehensibility of such black-box algorithms, a discipline called explainable artificial intelligence (XAI) has gained a lot of traction and has become very popular recently. This paper shows how to extend the portfolio of Data Farming output analysis methods using XAI. Commercial Case Study · Commercial Case Studies [In person] Machine Learning Applications Chair: Nathan Ivey (MOSIMTEC LLC) Explainable RL and Rule Reduction for Better Building Control Seongkwon Cho and Cheol-Soo Park (Seoul National University) Abstract Abstract Although it is widely acknowledged that reinforcement learning (RL) can be beneficially used for better building control, many RL-based control actions still remain unexplainable in the daily practice of facility managers. This study reports the development of explainable RL for cooling control of an existing office building.
A decision tree is applied to the trained DQN agent, and a set of reduced-order control rules is then derived. Compared to the DQN agent, the rules prove to be good enough: the difference in energy savings between the two is a marginal 2.8%. Application of Daylighting Gaussian Process Emulator to Lighting Control of an Existing Building Hyeong-Gon Jo, Young-Sub Kim, and Cheol-Soo Park (Seoul National University, Building Simulation Lab) Abstract Abstract In this commercial case study, the authors present a data-driven daylight simulation model for a factory building. With the inputs of solar altitude, azimuth, cloud cover and measured illuminance at a single reference point, the Gaussian Process simulation model can accurately predict illuminances at multiple points in the building (average of CVRMSEs = 20.4%). It is shown that the data-driven simulation model can overcome disadvantages of the physics-based simulation model and be beneficially integrated into continuous dimming control for the existing building. Technical Session · Healthcare Applications [In person] Applications of Simulation in Healthcare Chair: Daniel Garcia de Vicuna (Public University of Navarre) Improving Input Parameter Estimation In Online Pandemic Simulation Daniel Garcia-Vicuña and Fermin Mallor (Public University of Navarre) Abstract Abstract Simulation models are suitable tools to represent the complexity and randomness of hospital systems. To use them as forecasting tools during pandemic waves, accurate estimation, from real-time data, of all input parameters that define the patient pathway and length of stay in the hospital is necessary. We propose an estimation method based on an expectation-maximization algorithm that uses data from all patients admitted to the hospital to date. By simulating different pandemic waves, the performance of this method is compared with two other statistical estimators that use only complete data.
Results measuring the accuracy of the parameter estimates, and their influence on forecasts of the resources needed to provide healthcare to pandemic patients, show the superior performance of the new estimation method. We also propose a new parameterization of the Gompertz growth model that eases the creation of patient arrival scenarios in the pandemic simulation. Physician Shift Scheduling to Improve Patient Safety and Patient Flow in the Emergency Department Vishnunarayan Girishan Prabhu and Kevin Taaffe (Clemson University) and Ronald Pirrallo, William Jackson, and Michael Ramsay (Prisma Health-Upstate) Abstract Abstract Emergency Departments (EDs) act as the healthcare safety net for millions of people seeking medical care. To ensure smooth patient flow and efficient ED operations, it is crucial to maintain appropriate staffing levels and resource allocation. Although well-recognized problems, ED crowding and patient safety concerns are still prevalent, with recent studies identifying the ED as one of the departments most prone to medical errors. This research focused on developing an optimization model to identify optimal physician staffing levels that minimize the combined cost of patient wait times, handoffs and physician shifts in the ED, and on testing it in a simulation model. By generating two new shift schedules and testing them in the validated simulation model for a three-week period, we observed that patient time in the ED and handoffs can be reduced by as much as 27% and 26%, with a 1.4% increase in full-time equivalents compared to current practice.
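The staffing-optimization loop described in the emergency-department abstract above (evaluate candidate staffing levels in a simulation, then pick the level with the lowest combined cost) can be sketched with a toy queue. All rates, costs, and function names below are invented, and the model is far simpler than the paper's validated ED simulation:

```python
import random

def simulate_ed(n_physicians, arrival_rate, service_rate, n_patients, seed=1):
    """Tiny c-server FCFS queue with exponential interarrival and service
    times: returns the average patient wait before seeing a physician."""
    rng = random.Random(seed)  # common random numbers across staffing levels
    t = 0.0
    free_at = [0.0] * n_physicians   # time each physician is next free
    total_wait = 0.0
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)      # next patient arrival
        start = max(t, min(free_at))            # earliest possible service start
        total_wait += start - t
        i = free_at.index(min(free_at))         # physician who frees up first
        free_at[i] = start + rng.expovariate(service_rate)
    return total_wait / n_patients

def best_staffing(levels, wait_cost=10.0, shift_cost=25.0):
    """Pick the staffing level minimizing combined wait + shift cost."""
    def cost(c):
        w = simulate_ed(c, arrival_rate=2.0, service_rate=0.8, n_patients=5000)
        return wait_cost * w + shift_cost * c
    return min(levels, key=cost)

print(best_staffing([2, 3, 4, 5, 6]))
```

With these invented rates, two physicians cannot keep up with demand (utilization above 1), so waits explode, while large staffs pay for idle shifts; the minimum-cost level sits in between, which is the trade-off the paper's optimization formalizes.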
Using Simulation Models as Early Strategic Decision Support in Health Care - Designing a Medical 3D Printing Center at Point of Care in Hospitals Philipp Url, Stefan Paal, Thomas Rosenzopf, Nikolaus Furian, Wolfgang Vorraber, and Siegfried Voessner (Graz University of Technology) and Martin Toedtling, Ulrike Zefferer, and Ute Schaefer (Medical University of Graz) Abstract Abstract We present a scalable simulation model of a medical 3D printing center at point of care, which is used for early strategic decision making concerning its configuration. The model is part of an ongoing research initiative where 3D printing technology is designed and assessed for the purpose of producing patient specific medical devices at point of care. The model captures uncertainties in medical 3D printing technology, resulting yield, maintenance, clinical process and organizational factors. Based on a set of defined scenarios, capturing current and future demand including ramp up phases, the model’s performance can then be evaluated and used for estimating associated requirements and operational performance. We present the model itself, its underlying simplifications and assumptions as well as input data and several scenarios. The results and their impact on strategic decisions will also be discussed. Technical Session · Data Science for Simulation [In person] DSS 3 Chair: Hamdi Kavak (George Mason University); Abdolreza Abhari (Ryerson University) Requirements for Data-Driven Reliability Modeling and Simulation of Smart Manufacturing Systems Jonas Friederich, Sune Chung Jepsen, Sanja Lazarova-Molnar, and Torben Worm (University of Southern Denmark) Abstract Abstract Planning and deploying reliable Smart Manufacturing Systems (SMSs) is of increasing interest to both scholars and practitioners. A high system reliability goes hand in hand with reduced maintenance costs and enables optimized repairs and replacements. 
To leverage the full potential of SMSs and enable data-driven reliability assessment, data needs should be precisely defined. System integration is a key concept of the Industry 4.0 initiative and it can aid the extraction of the needed data. In this paper, we study the data requirements for a novel middleware for SMSs to enable and support data-driven reliability assessment. We present this middleware architecture and demonstrate its application through a case study, which is used to generate exemplary data that corresponds to the derived requirements. The data requirements and the middleware architecture can support researchers in developing novel data-driven reliability assessment methods, as well as assist practitioners in designing and deploying SMSs in companies. A Queueing Model for Evaluation of Video Analytics Job Scheduling Strategies for Smart Cities Mani Sharifi, Abdolreza Abhari, and Sharareh Taghipour (Ryerson University) Abstract Abstract This paper aims to find a proper methodology for evaluating job scheduling strategies for a data-intensive application such as video analytics applications used for smart cities. To compare two simulation methods with the analytical modeling for such evaluation, we proposed a queueing model for a system consisting of some heterogeneous edge processors and one cloud processor and compared it with a simple simulation approach. For building the analytical modeling, we first defined the system's characteristics and developed a queueing model for the system to calculate the edges and cloud processors' working times. To show the proposed queueing model's applicability to calculate the performance measures of different dispatching strategies, we compared it with simulating the same instance using the first-in-first-out dispatching strategy in two different dispatching scenarios. 
The results show that both methods' outputs are the same, but the proposed queueing model's computational time is significantly less than that of the simulation technique. Commercial Case Study · Commercial Case Studies [In person] Data-Driven Simulation Chair: Nathan Ivey (MOSIMTEC LLC) A Collaborative Test and Data-Driven Simulation Solution German Reyes, Michael Allen, and Dayana Cope (Engineering USA) Abstract Abstract A strategic design approach during simulation model development is an increasingly important requirement to support large-scale simulation development. In this case study, we present the importance of adopting two design approaches during the development of simulation models; namely, test-driven development and data-driven modeling. In addition, in a multi-developer setting, the model architecture played an integral role in enabling collaboration by separating model components while ensuring appropriate interoperability between components and minimizing redundancy. A modular design made it possible for developers to focus on specific behavioral aspects of the simulation model, test model behavior in a test environment and integrate all of the architectural pieces into the main master model. The model is automatically generated from data stored in an external database. This modeling approach has proven extremely useful in a setting where multiple developers worked on different model functionalities asynchronously. Simulating Backfill Operations for Underground Mining Saurabh Parakh (MOSIMTEC) and Nicole Russell (Sibanye Stillwater) Abstract Abstract This case study focuses on the simulation model that was developed for Sibanye-Stillwater’s underground platinum mining operations in Nye, MT. Sibanye-Stillwater is a global mining company based in South Africa with Platinum Group Metals (PGM) and Gold mines in the Americas and Africa.
One of their US operations, Stillwater Mining Company in Montana, owns and operates underground mines for PGM ore and a concentrator plant. The process involves mining, transporting muck to the surface, milling ore and backfilling the mined-out cavities with tailings from milling. The circular dependency between mining and backfilling, along with the variability of each task, makes it difficult to develop a realistic schedule and to deploy equipment and people efficiently. This presentation will review the simulation model used to help Sibanye understand how the bottleneck shifts from week to week, which resource is preventing underground mining from increasing production, and where capital investments are needed in backfill operations. Commercial Case Study · Commercial Case Studies [In person] Logistics Chair: Amy Greer (MOSIMTEC, LLC) Simulation-Based Aircraft Spare Parts Optimization With Operational Objectives And Constraints Salena Hess, Dimitris Kostamis, Kriton Konstantinidis, and Konstantinos Varsos (Oliver Wyman) Abstract Abstract Despite numerous theoretical and practical advances in aircraft spare parts inventory management, airline operations are still being continuously challenged by elevated numbers of delays and cancellations, driven by part (un)availability. Most available methods consider a static approach, optimizing inventory reorder points based on service level (availability of a part at the time of demand). To many carriers’ disappointment, these methods tend to disregard the dynamics of their operation, which include, among other things, the ability to defer maintenance, expedite inventory delivery, and borrow parts, and they fail to optimize for the ultimate goal of minimizing delays and cancellations. In this work, we introduce a simulation-based inventory allocation optimization framework that accounts for the dynamics of airline operations and directly connects part availability to operational impact.
We present a real-life case study of a major international carrier that has deployed this simulation-based optimization model. Case Study: Emulation Of Amazon Air Hub For Material Flow Control Validation Weilin Li, Hao Zhou, and Nanaware Ganesh (Amazon) Abstract Abstract Discrete Event Simulation (DES) has been extensively used in Amazon Worldwide Design Engineering as a decision support tool to analyze and experiment with different warehouse layout, process, control logic, staffing, and scheduling decisions. However, standalone simulation requires re-implementing all the algorithms and configurations of the actual warehouse flow control system within the simulation software. This results in some gaps between the real and “simulated” system no matter how closely the logic is replicated. On top of that, there are still things that won’t be captured by simulation, such as network latency between hosts, databases, and caches in the production environment. This case study presents a high-fidelity full-system emulation developed for the Amazon KCVG Air Hub. It is used to debug, test, validate, and optimize the flow control production software in a virtual environment. Planning for TR3 Production using Simio Patricia Buchanan (University of Washington) and Oscar Candanoza and Ivan Iturriaga (L3Harris) Abstract Abstract L3Harris seeks to evaluate their current production plan and capacity for the upcoming TR3 production launch. Specifically, the simulation team seeks to assess the resource plan, determine bottlenecks, and predict the program’s ability to meet the contract schedule. The simulation covers each of the three product lines within TR3 (ICP, PCD, and AMS). Technical Session · Using Simulation to Innovate [In person] Using Simulation and Digital Twins to Innovate - Are We Getting Smarter? Chair: Simon J. E. Taylor (Brunel University London) Using Simulation and Digital Twins to Innovate - Are We Getting Smarter? Simon J. E.
Taylor (Brunel University London), Loo Hay Lee (National University of Singapore), Bjorn Johansson (Chalmers University of Technology), Sumin Jeon (Siemens Pte Ltd.), Peter Lenderman (D-SIMLAB Technologies Pte Ltd), and Guodong Shao (National Institute of Standards and Technology) Abstract Abstract Digital Twins have recently emerged as a major new area of innovation. Digital Twins are often found at the core of “smart” solutions that have also emerged as major areas of innovation. Modeling and Simulation (M&S) approaches create a model of a real-world system that is linked to data sources and is used to simulate and predict the behavior of its real-world counterpart. On the face of it, Digital Twins and M&S appear to be similar, if not the same. Is this actually the case? Are the two fields really separate, or is Digital Twin research re-inventing the “M&S wheel”? To investigate these relationships, in this panel we will explore some contemporary innovations with Digital Twins and discuss whether Digital Twins are a contemporary “refresh” or “rebranding” of M&S, or whether there are exciting new synergies. Technical Session · Model Uncertainty and Robust Simulation [In person] New Advances in Simulation Optimization and Estimation Chair: Sara Shashaani (North Carolina State University) Selection of the Most Probable Best under Input Uncertainty Kyoung-Kuk Kim and Taeho Kim (Korea Advanced Institute of Science and Technology) and Eunhye Song (Pennsylvania State University) Abstract Abstract We consider a ranking and selection problem whose configuration depends on a common input model estimated from finite real-world observations. To find a solution robust to estimation error in the input model, we introduce a new concept of robust optimality: the most probable best. Taking the Bayesian view, the most probable best is defined as the solution whose posterior probability of being the best is the largest given the real-world data.
Focusing on the case where the posterior on the input model has finite support, we study the large deviation rate of the probability of incorrectly selecting the most probable best and formulate an optimal computing budget allocation (OCBA) scheme for this problem. We further approximate the OCBA problem to obtain a simple and interpretable budget allocation rule and propose sequential learning algorithms. A numerical study demonstrates good performance of the proposed algorithms. Generating Synthetic Populations Based on German Census Data Johannes Ponge, Malte Enbergs, Michael Schüngel, Bernd Hellingrath, André Karch, and Stephan Ludwig (University of Münster) Abstract Abstract Spatial agent-based simulations of infectious disease epidemics require a high-resolution regional population model. However, only aggregated demographic data is available for most geographic regions. Furthermore, the infectious disease application case can require the fusion of multiple data sources (e.g. census and public health statistics), inducing demand for a modular and extensible modeling approach. In this work we provide a novel sequential sample-free approach to generate synthetic baseline populations for agent-based simulations, combining synthetic reconstruction and combinatorial optimization. We applied the approach to generate a population model for the German state of North Rhine-Westphalia (17.5 million inhabitants), which yielded an average accuracy of around 98% per attribute. The resulting population model is publicly available and has been utilized in multiple simulation-based infectious disease case studies. We suggest that our research can pave the way for more geographically granular synthetic populations to be used in model-driven infectious disease epidemic prediction and prevention.
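The sequential, sample-free idea behind such population generators can be illustrated with a toy sketch. The region, marginals, and attribute names below are invented, and the paper's actual pipeline additionally applies combinatorial optimization to fuse multiple data sources:

```python
import random
from collections import Counter

# Hypothetical aggregate census marginals for one region (all numbers invented).
age_marginal = {"0-17": 300, "18-64": 550, "65+": 150}
sex_marginal = {"female": 510, "male": 490}

def generate_population(age_m, sex_m, seed=0):
    """Sample-free sequential generation: expand each attribute directly
    from its aggregate marginal so the totals match exactly, then pair
    the attribute lists at random."""
    rng = random.Random(seed)
    ages = [a for a, n in age_m.items() for _ in range(n)]
    sexes = [s for s, n in sex_m.items() for _ in range(n)]
    rng.shuffle(ages)
    rng.shuffle(sexes)
    return [{"age": a, "sex": s} for a, s in zip(ages, sexes)]

pop = generate_population(age_marginal, sex_marginal)
# Per-attribute accuracy is exact by construction.
assert Counter(p["age"] for p in pop) == Counter(age_marginal)
assert Counter(p["sex"] for p in pop) == Counter(sex_marginal)
```

Because no disaggregated sample is required, only published marginal counts, the same scheme scales to many regions; matching joint distributions is where the optimization step comes in.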
Technical Session · Model Uncertainty and Robust Simulation [In person] Sampling and Estimation Chair: Sara Shashaani (North Carolina State University) Non-parametric Uncertainty Bias and Variance Estimation via Nested Bootstrapping and Influence Functions Kimia Vahdat and Sara Shashaani (North Carolina State University) Abstract Abstract When using limited datasets, modeling the uncertainty via nonparametric methods provides arguably more robust estimators of the unknown. We propose a novel nested bootstrap method that accounts for the uncertainty from various sources (input data, model, and estimation) more robustly. The nested bootstrap is particularly suited to the more nuanced conditional settings in constructing prediction rules but is easily generalizable. We utilize influence functions to estimate the bias due to input uncertainty and propose a procedure to correct the estimators' bias in a simulation optimization routine. Implementations in the context of feature selection via simulation optimization on two simulated datasets show a significant improvement in robustness and accuracy. Efficient Computation for Stratified Splitting Peter Glynn (Stanford University) and Zeyu Zheng (University of California, Berkeley) Abstract Abstract Many applications in the areas of finance, the environment, service systems, and statistical learning require the computation of a general function of the expected performances associated with one or many random objects. When complicated stochastic systems are involved, such computation needs to be done by stochastic simulation, and the computational cost can be expensive. We design simulation algorithms that exploit the common random structure shared by all random objects, an approach referred to as stratified splitting. We discuss the optimal simulation budget allocation problem for the proposed algorithms. A brief numerical experiment is conducted to illustrate the performance of the proposed algorithm with various budget allocation rules.
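The flavor of spreading a fixed simulation budget across strata can be seen in a classical proportional-allocation sketch. This illustrates ordinary stratified Monte Carlo, not the paper's stratified-splitting algorithm itself, and the integrand and budget are invented:

```python
import random

def stratified_mc(f, budget, n_strata, rng):
    """Estimate E[f(U)], U ~ Uniform(0,1), by splitting [0,1) into equal
    strata and spreading the simulation budget evenly across them."""
    per_stratum = budget // n_strata
    total = 0.0
    for k in range(n_strata):
        lo = k / n_strata
        # Draw uniformly within stratum [lo, lo + 1/n_strata).
        total += sum(f(lo + rng.random() / n_strata) for _ in range(per_stratum))
    return total / (per_stratum * n_strata)

rng = random.Random(1)
f = lambda u: u * u                  # E[f(U)] = 1/3 exactly
est = stratified_mc(f, budget=10_000, n_strata=50, rng=rng)
assert abs(est - 1 / 3) < 0.01       # stratification sharply reduces variance
```

Even allocation is only one rule; the paper's question is precisely how to allocate the budget optimally when a general function of several expectations is the target.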
Nonparametric Kullback-Leibler Divergence Estimation Using M-spacing Linyun He and Eunhye Song (Pennsylvania State University) Abstract Abstract Entropy of a random variable with an unknown distribution function can be estimated nonparametrically by spacing methods when independent and identically distributed (i.i.d.) observations of the random variable are available. We extend the classical entropy estimator based on sample spacing to define an m-spacing estimator for the Kullback-Leibler (KL) divergence between the distributions of two sets of i.i.d. observations, which can be applied to measure discrepancy between real-world system output and simulation output as well as between two simulators’ outputs. We show that the proposed estimator converges almost surely to the true KL divergence as the numbers of outputs collected from both systems increase under mild conditions and discuss the required choices for m and the simulation output sample size as functions of the real-world sample size. Additionally, we show central limit theorems for the proposed estimator with appropriate scaling. Technical Session · Simulation in Industry 4.0 [In person] Simulation as Digital Twin in Industry 4.0 Framework I Chair: Mohammad Dehghani (Northeastern University) Digital Twin-based Services for Smart Production Logistics Erik Flores-García, Yongkuk Jeong, and Sichao Liu (KTH Royal Institute of Technology); Goo Young Kim (Sungkyunkwan University); and Magnus Wiktorsson and Lihui Wang (KTH Royal Institute of Technology) Abstract Abstract Digital Twin (DT)-based services including Industrial Internet of Things (IIoT) are essential for achieving the vision of Smart Production Logistics and enhancing manufacturing competitiveness. DT-based services combining IIoT provide real-time location of materials and optimization of resources for addressing mass customization and fluctuating market demand.
However, literature applying IIoT and achieving DT-based services in smart production logistics (SPL) is scarce. Accordingly, the purpose of this study is to analyze the combined use of DT-based services and IIoT in SPL. We propose a framework combining DT-based services and IIoT for the real-time location and optimization of material handling. The study draws results from an SPL demonstrator based on a case in the automotive industry applying the proposed framework. The results show improvement in the delivery, makespan, and distance travelled during material handling. The study provides critical insight for managers responsible for improving the delivery of materials and information inside a factory. Simulation-Optimization of Digital Twin Mohammad Dehghanimohammadabadi (Northeastern University), Renee Thiesing (Simio LLC.), and Sahil Belsare (Northeastern University) Abstract Abstract With rapid advancements in cyber-physical manufacturing, the Internet of Things, simulation software, and machine learning algorithms, the applicability of Industry 4.0 is gaining momentum. The demand for real-time decision-making in the manufacturing industry has given significant attention to the field of Digital Twin (DT). The whole idea revolves around creating a digital counterpart of the physical system based on enterprise data to exploit the effects of numerous parameters and make informed decisions. Based on that, this paper proposes a simulation-optimization framework for the DT model of a beverage manufacturing plant. A data-driven simulation model developed in Simio is integrated with Python to perform multi-objective optimization. The framework explores optimal solutions by simulating multiple scenarios, altering the availability of operators and dispatching/scheduling rules. The results show that simulation optimization can be integrated into Digital Twin models as part of real-time production planning and scheduling.
Multi-Agent System Model for Dynamic Scheduling in Flexible Job Shops Akposeiyifa Joseph Ebufegha and Simon Li (University of Calgary) Abstract Abstract One of the hallmarks of Industry 4.0 is the development of a smart manufacturing system (SMS). These are highly modular systems, with every physical resource being autonomous and capable of exchanging information with the others over an industrial network. The resources can self-organize to schedule job shop operations in real time. The ability to schedule in real time allows for better use of the flexibility in part processing operation sequences than with conventional manufacturing systems. This could potentially result in reduced order completion times and increased average machine utilization. However, it is difficult to investigate the benefits of such a system because it is expensive to build, so simulation is necessary. This paper presents a model for dynamic scheduling in an SMS as well as a multi-method model for simulating its operation. The paper also presents a preliminary investigation into the benefits of the proposed scheduling strategy. Commercial Case Study, Panel · Commercial Case Studies [In person] Panel: Challenges in Satisfying the Need and Promotion of Modeling & Simulation Workforce Chair: David T. Sturrock (Simio LLC) Panel: Challenges in Satisfying the Need and Promotion of Modeling & Simulation Workforce Hessam Sarjoughian (Arizona State University), Edward Yellig (Intel Corporation), James Nutaro (Oak Ridge National Laboratory), and Akshay Rajhans (MathWorks) Abstract Abstract The foundational role of simulation is to enable understanding, discovery, development, and operations of dynamical systems. As such, modeling and simulation professionals intrinsically encounter problems that have non-trivial complexity and scale traits. Inevitably the systems they encounter must be modeled, simulated, and evaluated.
This panel presents some challenges in attracting talented individuals to pursue education and professional careers, and in continuing education to satisfy current and future knowledge and practices while advancing basic and applied research and development in modeling and simulation. To highlight modeling and simulation workforce development, panelists share thoughts borne out of extensive professional and academic experiences at Intel®, Oak Ridge National Lab, MathWorks®, and ASU. Technical Session · Simulation for Smart Cities [Virtual] Simulation for Smart Cities 2 Chair: Edward Y. Hua (MITRE Corporation) Simulating Urban Transition in Major Socio-economic Shocks Jiaqi Ge (University of Leeds) and Bernardo Furtado (Institute for Applied Economic Research) Abstract Abstract This paper presents an agent-based model that simulates the dynamic process of urban transition in major socio-economic shocks. The model features the coupled housing and labour markets in heterogeneous neighbourhoods in a city that is going through a major transition in its industrial structure. We consider a scenario where the old or traditional industry is gradually replaced by a new industry. Preliminary results from the scenario analysis are presented. This paper makes several contributions: 1) We introduce feedback links between the housing and the labour market by connecting the choices of individuals and businesses and the evolution of neighbourhoods; 2) We introduce links between the local workers, businesses, and the overall industrial structure in a complex urban system; 3) Finally, we propose a complex system approach to major socio-economic transitions in an urban system. A Workflow for Data-Driven Fault Detection and Diagnosis in Buildings Joseph Boi-Ukeme and Gabriel Wainer (Carleton University) Abstract Abstract The evolution of smart buildings has been driven by technological advancement in building control systems, which has made building systems more fragile and prone to faults.
Buildings now generate an enormous amount of data, and the collection and analysis of such data is useful for detecting faults. However, fault detection approaches are not optimal in buildings due to several challenges. Data for fault detection is not readily available because of poor data collection practices, and often, when data is available, its quality is inadequate to build useful models for fault detection and diagnosis (FDD). We propose a workflow for data-driven fault detection and diagnosis to deal with some of these challenges. The workflow incorporates a data collection framework and recommends the best data-driven modeling practices to improve data quality and model performance. Technical Session [Virtual] Patient Centered Modeling and Clinical Trials Chair: Barry L. Nelson (Northwestern University) Sensitivity Analysis in Clinical Trial Simulation at SAS Institute Wendy Xi Jiang, Bahar Biller, and Jim Box (SAS Institute, Inc.) and Barry L. Nelson (Northwestern University) Abstract Abstract Clinical trial enrollment is expensive, important, and subject to many uncertainties. Simulation captures these uncertainties, so SAS Institute created the Clinical Trial Enrollment Simulator (CTrES) as a tool specifically for enrollment planning. However, simulation provides no mathematical expression from which to extract sensitivity measures that are critical for problem diagnosis and management. This paper describes sensitivity analysis technology created for CTrES requiring only the output data obtained from simulation of the base scenario, and demonstrates it on a realistic enrollment planning problem for the United States.
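One standard way to obtain sensitivities using only base-scenario simulation output is the likelihood-ratio (score-function) method. The sketch below applies it to a toy Poisson enrollment model; the rate and horizon are invented, and CTrES itself may use a different technique:

```python
import random
import statistics

# Toy enrollment model: patients arrive as a Poisson process with an
# assumed base-scenario rate (patients/month) over a fixed horizon.
rate, months = 4.0, 12.0

def simulate_enrollment(rng):
    """Count arrivals over the horizon via exponential inter-arrival times."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(rate)
        if t > months:
            return n
        n += 1

rng = random.Random(7)
runs = [simulate_enrollment(rng) for _ in range(100_000)]
# Likelihood-ratio sensitivity: multiply each output by the score of the
# input distribution. For N ~ Poisson(rate * months) the score in rate is
# n / rate - months, and d E[N] / d rate = months exactly, giving a check.
grad = statistics.fmean(n * (n / rate - months) for n in runs)
assert abs(grad - months) < 1.5
```

The appeal matches the abstract's point: no extra scenarios need to be simulated, since the derivative estimate is computed from the base run's own output.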
BBECT: Bandit Based Ethical Clinical Trials Ramprasath L and Mohammed Shahid Abdulla (Indian Institute of Management, Kozhikode) Abstract Abstract The aim of an Ethico-Optimal clinical trial is to randomly allocate the new drug (ND) and the standard of care (SOC) to patients in the sample, but with a greater fraction being administered ND if doing so is statistically justified. Such an adaptation is not possible in static trials, in which approximately half the patients would receive ND and the remaining patients SOC, despite evidence within the trial that ND is efficacious. We adapt a canonical stochastic multi-armed bandit algorithm named UCB1 to a clinical trials setting and analyze the resulting type-2 error (β), as well as the minimum sample size required by such a trial for a given β level. We also present simulations to establish that such a trial has stronger ethical properties, both to verify our analysis and to demonstrate an empirical advantage over an existing method. Supporting Efficient Assignment of Medical Resources in Cancer Treatments with Simulation-optimization Leandro do C. Martins, Juliana Castaneda, and Angel A. Juan (Universitat Oberta de Catalunya); Barry B. Barrios (Stanford University); Abtin Tondar and Laura Calvet (Universitat Oberta de Catalunya); and Jose Luis Sanchez-Garcia (Autonomous University of Barcelona) Abstract Abstract When scheduling multi-period medical treatments for patients with cancer, medical committees have to consider a large amount of data, variables, sanitary and budget constraints, as well as probabilistic elements. In many hospitals worldwide, specialists regularly decide the optimal schedule of treatments to be assigned to patients by considering multiple periods and the number of available resources. Hence, decisions have to be made upon the priority of each patient, the available treatments, their expected effects, and the proper order and intensity in which they should be applied.
Consequently, medical experts have to assess many possible combinations and, eventually, choose the one that maximizes the survival chances or expected life quality of patients. To support this complex decision-making process, this paper introduces a novel methodology that combines a biased-randomized heuristic with simulation to return “elite” alternatives to experts. A simplified yet illustrative case study shows the main concepts and potential of the proposed approach. Panel · Healthcare Applications [Virtual] Panel on Simulation Modeling for Covid-19 Chair: Christine Currie (University of Southampton) Panel on Simulation Modeling for Covid-19 Dionne M. Aleman (University of Toronto), Anastasia Anagnostou (Brunel University London), Christine Currie (University of Southampton), John W. Fowler and Esma S. Gel (Arizona State University), and Alexander R. Rutherford (Simon Fraser University) Abstract Abstract This is a panel paper that discusses the use of simulation modeling in mitigating the effects of the Covid-19 pandemic. We have gathered a group of expert modelers from around the world who have worked on healthcare simulation projects associated with the pandemic, and the paper provides their answers to an initial set of questions. These serve to describe the modeling work that has taken place already and to make suggestions for future directions, both in modeling Covid-19 and in preparing the world for future healthcare emergencies. Technical Session · Modeling Methodology [Virtual] DEVS Chair: Hessam Sarjoughian (Arizona State University) Specifying and Executing the Combination of Timed Finite State Automata and Causal-Block Diagrams by mapping onto DEVS Hans Vangheluwe, Randy Paredis, and Joachim Denil (University of Antwerp, Flanders Make) Abstract Abstract Multi-Paradigm Modelling (MPM) advocates explicitly modelling every part and aspect of a system, at the most appropriate level(s) of abstraction, using the most appropriate formalism(s).
We show, starting from a representative Personalized Rapid Transportation rail car example, how MPM naturally leads to the need to combine formalisms. To give these formalisms a precise semantics and to make them executable, we choose to map them all onto behaviourally equivalent (modulo some level of approximation in the case of continuous formalisms) Discrete EVent system Specification (DEVS) models. Our focus and main contribution is the principled combination TFSA>(CBD+StEL) of Timed Finite State Automata (TFSA) and Causal Block Diagrams (CBDs) using a State Event Location “glue” formalism StEL, and their mapping onto DEVS. The result of our principled workflow, explicitly modelled in a Formalism Transformation Graph + Process Model (FTG+PM), is an accurate and efficient simulator. This is demonstrated on the rail car case. DEVS Markov Modeling and Simulation of Activity-Based Models for MBSE Application Abdurrahman Alshareef (King Saud University) and Chungman Seo, Anthony Kim, and Bernard P. Zeigler (RTSync Corp.) Abstract Abstract DEVS has been proposed as the basic modeling and simulation framework for a Model-Based System Engineering (MBSE) methodology that supports the critical stages in top-down design of complex systems. Here we propose a novel Discrete Event System Specification (DEVS) Markov simulation for Activity-Based Models to provide a means for experiment design to examine flows in activities with behavioral elements. The approach allows for comparing different flows, among other benefits, not just in terms of different parameterizations of their probabilistic properties but also with the inherent adjustment of their inner behavioral specification in a seamless manner. We show how these adjustments of the models would otherwise be rather costly and harder to maintain. Complexity Analysis on Flattened PDEVS Simulations Guillermo G.
Trabes (Carleton University, Universidad Nacional de San Luis); Veronica Gil-Costa (Universidad Nacional de San Luis, CCT CONICET San Luis); and Gabriel A. Wainer (Carleton University) Abstract Abstract Discrete Event Systems Specification (DEVS) is a well-known formalism to develop models using the discrete event approach. One advantage of DEVS is a clear separation between the modeling and simulation activities. The user only needs to develop models, and general algorithms exist in the literature to execute simulations. DEVS was enhanced in the PDEVS formalism to better handle simultaneous events. To execute PDEVS simulations, a well-known and widely accepted algorithm was introduced: the PDEVS simulation protocol. However, since its creation, the protocol has evolved, and several versions have been proposed and implemented. In this work we propose an analytical approach to fully define and analyze this protocol. We divide the protocol into steps and sub-steps, and for each of them we present a computational complexity analysis based on two key factors of the protocol’s execution: the messages the components exchange and the computations the components execute. Technical Session · Agent-based Simulation [Virtual] ABS Modeling Methodologies Chair: Bhakti Stephan Onggo (University of Southampton) A Generalized Network Generation Approach for Agent-based Models Kristina Heß (NORDAKADEMIE), Oliver Reinhardt (University of Rostock), Jan Himmelspach (NORDAKADEMIE), and Adelinde M. Uhrmacher (University of Rostock) Abstract Abstract For their initialization, many agent-based models require a population which corresponds in its essential characteristics to the examined real population. It should reflect the real distribution of attributes of interest, e.g., age, gender, or income, as well as the social network between the agents. Since a disaggregated data set with all required information is rarely available, a synthetic population must be created.
Methods that assign realistic attribute values to agents are well studied in the literature. In contrast, the generation of plausible social networks has been less extensively researched, although several comprehensive ad hoc models have been developed. The focus of this work is to introduce a reusable, generalized approach for the generation of synthetic social networks. Symbolic regression is used to automatically train generation rules based on a network sample, instead of having to define rules a priori. Manually specified constraints are taken into account to avoid implausible relationships. CSonNet: An Agent-Based Modeling Software System for Discrete Time Simulation Best Contributed Applied Paper - Finalist Chris J. Kuhlman, Joshua Priest, Aparna Kishore, Lucas Machi, Dustin Machi, and S. S. Ravi (University of Virginia) Abstract Abstract Contagion dynamics on networks are used to study many problems, including disease and virus epidemics, incarceration, obesity, protests and rebellions, needle sharing in drug use, and hurricanes and other natural disaster events. Simulators to study these problems range from smaller-scale serial codes to large-scale distributed systems. In recent years, Python-based simulation systems have been built. In this work, we describe a new Python-based agent-based simulator called CSonNet. It differs from codes such as Epidemics on Networks in that it performs discrete time simulations based on the graph dynamical systems formalism. CSonNet is a parallel code; it implements concurrency through an embarrassingly parallel approach of running multiple simulation instances on a user-specified number of forked processes. It has a modeling framework whereby agent models are composed using a set of pre-defined state transition rules. We provide strong-scaling performance results and case studies to illustrate its features.
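The graph-dynamical-systems style of discrete time contagion simulation can be sketched with a toy synchronous SIR-like transition rule. This is illustrative only and does not use CSonNet's actual API:

```python
import random

def step(graph, state, p, rng):
    """One synchronous update of a toy SIR contagion: each susceptible
    node is infected with probability p per infected neighbor; infected
    nodes recover after exactly one time step."""
    new = dict(state)
    for v, nbrs in graph.items():
        if state[v] == "S":
            k = sum(1 for u in nbrs if state[u] == "I")
            if k and rng.random() < 1 - (1 - p) ** k:
                new[v] = "I"
        elif state[v] == "I":
            new[v] = "R"
    return new

# Path graph 0-1-2-3 with node 0 initially infected.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
state = {0: "I", 1: "S", 2: "S", 3: "S"}
rng = random.Random(3)
for _ in range(5):
    state = step(graph, state, p=1.0, rng=rng)
# With p = 1 the contagion sweeps the path deterministically.
assert all(s == "R" for s in state.values())
```

Replications of such a step loop with different seeds are independent, which is why an embarrassingly parallel design of forking one process per simulation instance, as CSonNet does, scales well.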
An Uncertainty Quantification Approach for Agent-Based Modeling of Human Behavior in Networked Anagram Games Zhihao Hu and Xinwei Deng (Virginia Tech) and Chris Kuhlman (University of Virginia) Abstract Abstract In a group anagram game, players are provided letters to form as many words as possible. They can also request letters from their neighbors and reply to letter requests. Currently, a single agent-based model is produced from all experimental data, with dependence only on the number of neighbors. In this work, we build, exercise, and evaluate enhanced agent behavior models for networked group anagram games under an uncertainty quantification framework. Specifically, we cluster game data for players based on their skill levels (forming words, requesting letters, and replying to requests), perform multinomial logistic regression for transition probabilities, and quantify uncertainty within each cluster. The result of this process is a model where players are assigned different numbers of neighbors and different skill levels in the game. We conduct simulations of ego agents with neighbors to demonstrate the efficacy of our proposed methods. Technical Session · Agent-based Simulation [Virtual] Simulations of Human Movements Chair: Chris Kuhlman (University of Virginia) Agent-based Simulation of Aircraft Boarding Strategies Considering Elderly Passengers Denise Ferrari and Bruna Fabrin (Instituto Tecnologico de Aeronautica) Abstract Abstract Boarding is an important process for airline companies, with a direct impact on operational efficiency and customer satisfaction. In some countries, priority boarding is required by law; elders, pregnant women, and people with infants or disabilities have the right to board first, regardless of ticket class, loyalty program, or boarding group.
The present work examines the effect of the adoption of a priority boarding policy on total boarding time, along with other factors that are known to affect the efficiency of the boarding process, using an agent-based simulation model that represents the boarding process in a Boeing 767-300 aircraft. The simulated results indicate that the boarding process is improved by adopting priority boarding, which not only benefits operational efficiency but also has the potential to enhance customer experience, thus suggesting that priority boarding should be a highly encouraged practice among airline companies. Measuring Proximity of Individuals during Aircraft Boarding Process with Elderly Passengers through Agent-based Simulation Denise Ferrari and Bruna Fabrin (Instituto Tecnologico de Aeronautica) Abstract Abstract The COVID-19 pandemic imposed severe restrictions on the mobility of people worldwide, consequently bringing great losses to the air transportation industry. The evaluation of biosafety risk has never been more critical for the recovery of transport and economic activity. Elderly passengers constitute an especially vulnerable population to infectious diseases. The main objective of the present study was to investigate the factors that lead to increased proximity of individuals during the boarding process when elderly passengers are present. In order to do so, an agent-based simulation model was built to represent the boarding process in a Boeing 737 aircraft. The simulated results indicate that elderly passengers are less exposed to contact with other individuals during the boarding process when social distancing actions are taken and when they are the last passengers to come onboard, although this strategy may increase total boarding time. A Model-based Analysis of Evacuation Strategies in Hospital Emergency Departments Boyi Su and Jaeyoung Kwak (Nanyang Technological University); Ahmad Reza Pourghaderi (Singapore Health Services); Michael H.
Lees (Amsterdam UMC, University of Amsterdam); Kenneth B. K. Tan, Shin Yi Loo, Ivan S.Y. Chua, and Joy L. J. Quah (Singapore General Hospital); Wentong Cai (Nanyang Technological University); and Marcus E. H. Ong (Duke-NUS Medical School) Abstract Abstract Evacuation planning for emergency incidents is an essential preparedness activity for Emergency Departments (EDs), which normally contain patients with severe illness and limited mobility. However, such preparedness can be challenging due to a lack of empirical data and the difficulty of conducting physical drills. We propose an agent-based model to simulate the evacuation process in EDs containing medical staff, rescuers, visitors, and various types of patients. In a case study, we apply the model to a peak-hour scenario of the ED of the largest hospital in Singapore. Two rescue strategies with different behavior sequences of medical staff, as suggested by the practitioners, are evaluated. The simulation results show that prioritizing preparation of all the patients yields a shorter total evacuation time but leads to fewer evacuated cases in the first 20 minutes and more serious congestion compared to one-by-one transfer of individual patients. Technical Session · Healthcare Applications [Virtual] Operations Management and Patient Flow Chair: Vijay Gehlot (Villanova University) Toolkit for Healthcare Professionals: A Colored Petri Nets Based Approach for Modeling and Simulation of Healthcare Workflows Vijay Gehlot, Jake Robinson, and Manisha Tanwar (Villanova University); Elliot B. Sloane (Villanova University; Foundation for Living, Wellness, and Health); and Nilmini Wickramasinghe (Swinburne University of Technology, Epworth HealthCare) Abstract Abstract In 2017, Academic Emergency Medicine convened a consensus conference on Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes to assess the impact of simulation on various aspects of healthcare delivery.
One focus area was the role that computer modeling and simulation can and should play in the research and development of emergency care delivery systems. In this paper, we illustrate the use and application of Colored Petri Nets (CPNs) for modeling and simulating healthcare workflow processes. Specifically, we detail our approach by modeling patient flow to an operating room. The model accounts for various resources and their utilization. We use hierarchical Colored Petri Nets with modules and the associated graphical integrated development environment called CPN Tools for creating and simulating the model. The hierarchy and module concepts of CPNs allow the modeling of large and complex systems in an incremental and top-down manner. A High-fidelity, Machine-learning Enhanced Queueing Network Simulation Model for Hospital Ultrasound Operations Yihan Pan, Zhenghang Xu, and Jin Guang (School of Data Science, The Chinese University of Hong Kong, Shenzhen); Jingjing Sun (Department of Information Management and Information System, Shanghai University of Finance and Economics); Chengwenjian Wang and Xuanming Zhang (School of Mathematical Science, Fudan University); Xinyun Chen and Jiangang Dai (School of Data Science, The Chinese University of Hong Kong, Shenzhen); Yichuan Ding (Desautels Faculty of Management, McGill University); Pengyi Shi (Krannert School of Management, Purdue University); and Hongxin Pan, Kai Yang, and Song Wu (Luohu Hospital System) Abstract Abstract We collaborate with a large teaching hospital in Shenzhen, China and build a high-fidelity simulation model for its ultrasound center to predict key performance metrics, including the distributions of queue length, waiting time and sojourn time, with high accuracy. The key challenge to build an accurate simulation model is to understand the complicated patient routing at the ultrasound center. 
To address the issue, we propose a novel two-level routing component to the queueing network model and use machine learning tools to calibrate the routing components from data. Our empirical results show that the calibrated model is of high fidelity and yields accurate prediction results for the performance metrics. Technical Session · Aviation Modeling and Analysis [Virtual] Airport Operations Chair: John Shortle (George Mason University) On Static vs Dynamic (Switching Of) Operational Policies in Aircraft Turnaround Team Allocation and Management Sudipta Saha and Maurizio Tomasella (University of Edinburgh), Giovanni Cattaneo and Andrea Matta (Politecnico di Milano), and Silvia Padron (TBS Business School) Abstract Abstract Aircraft turnaround operations represent the fulcrum of airport operations. They include all services to be provided to an aircraft between two consecutive flights. These services are executed by human operators, often organised in teams, who employ some related equipment and vehicles (e.g. conveyor belts, trolleys and tugs for baggage loading/unloading and transportation). In this paper, we focus on the real-time management of turnaround operations, and assess the relative merits and limitations of so-called dispatching rules that originate from the manufacturing literature. More precisely, we focus on the real-time allocation, on the day of operation, of teams of ground handling operators to aircraft turnarounds. This is pursued from the viewpoint of third-party service providers. We employ simulation, in conjunction with deep reinforcement learning, and work on the case of a real airport and the entirety of its turnaround operations involving multiple service providers. 
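The turnaround-team study above evaluates dispatching rules borrowed from the manufacturing literature. As a purely illustrative sketch (toy data and rules assumed here, not the authors' model or airport), a minimal comparison of two classic rules, first-in-first-out versus earliest due date, for one team serving randomly arriving turnaround jobs:

```python
import random

def simulate(jobs, rule):
    """Serve turnaround jobs with one team; return total tardiness.
    jobs: list of (arrival, service, due) tuples -- hypothetical toy data."""
    t, tardiness, pending, i = 0.0, 0.0, [], 0
    jobs = sorted(jobs)  # order by arrival time
    while i < len(jobs) or pending:
        # admit every job that has arrived by time t
        while i < len(jobs) and jobs[i][0] <= t:
            pending.append(jobs[i]); i += 1
        if not pending:
            t = jobs[i][0]; continue  # jump to the next arrival
        # the dispatching rule picks which pending job is served next
        if rule == "FIFO":
            job = min(pending, key=lambda j: j[0])  # earliest arrival
        else:  # "EDD": earliest due date
            job = min(pending, key=lambda j: j[2])
        pending.remove(job)
        t = max(t, job[0]) + job[1]           # serve the job
        tardiness += max(0.0, t - job[2])     # lateness past its due time
    return tardiness

random.seed(1)
jobs = [(random.uniform(0, 60), random.uniform(5, 20),
         random.uniform(20, 90)) for _ in range(30)]
print(simulate(jobs, "FIFO"), simulate(jobs, "EDD"))
```

Real turnaround management adds team travel, multiple service providers, and learned (e.g., deep reinforcement learning) policies on top of such baselines.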
Enhanced Operational Management of Airport Ground Support Equipment for Better Aircraft Turnaround Performance Siddhanta Saggar and Maurizio Tomasella (University of Edinburgh) and Giovanni Cattaneo and Andrea Matta (Politecnico di Milano) Abstract Abstract Within the context of airport operations, this work focuses on enhancing the planning and real-time allocation of certain resources that are used to turn around an aircraft between two consecutive flights. These operations take place in the area of an airport called the apron. At peak times in particular, and when resource capacity is especially tight, apron operations tend to be affected by either unavailability or late arrival of certain assets at the stand. The key element of this paper is the proposal of a new resource booking system, which operates in real time and deals with the related uncertainties. The booking mechanism aims to allow the airlines to book certain resources in advance, in particular ground support equipment. Our choice of a real case study will help us to assess the likely benefits, pros and cons of this system. Invited Paper, Contributed Paper, Technical Session · Aviation Modeling and Analysis [Virtual] Advanced Technologies in Air Transportation Chair: John Shortle (George Mason University) Predicting Runway Configuration Transition Timings using Machine Learning Methods Max En Cheng Lau, Andy Jun Guang Lam, and Sameer Alam (Nanyang Technological University) Abstract Abstract Runway configuration change is one of the major factors affecting runway capacity. The transition time required to change from one runway configuration to another is a key concern in optimising runway configuration. This study formulates the prediction of runway transition timings as a machine learning regression problem, using an ensemble of regressors that provides continuous estimates based on flight trajectories, meteorological data, current and past runway configurations and active STAR routes.
The data consolidation and feature engineering convert heterogeneous sources of data and include a clustering-based prediction of arrival runways with an 89.9% validity rate. The proposed model is applied to PHL airport, with 4 runways and 23 possible configurations. The 6 major runway configuration changes, modelled using a Random Forest Regressor, achieved R^2 scores of at least 0.8 and a median RMSE of 18.8 minutes, highlighting the predictive power of the machine learning approach for informed decision-making in runway configuration change management. Technology Adoption in Air Traffic Management: A Combination of Agent-Based Modeling with Behavioral Economics Bill Roungas (KTH Royal Institute of Technology); Miguel Baena, Oliva Garcia-Cantu Ros, Ruben Alcolea, and Ricardo Herranz (Nommon Solutions and Technologies); and Jayanth Raghothama (KTH Royal Institute of Technology) Abstract Abstract The European Air Traffic Management (ATM) system is responsible for the safe and timely transportation of more than a billion passengers annually. It is a system that depends heavily on technology and is expected to stay abreast of technological advancements and be an early adopter of technologies. Nevertheless, technological change in ATM has historically developed at a slow pace. In this paper, an agent-based model (ABM) of the ATM technology deployment cycle is proposed. The proposed ABM is part of a larger project, which intends to recommend new policy measures for overcoming any barriers associated with technology adoption in ATM. It is one of the first approaches to technology adoption in ATM that combines the organizational point of view, i.e. the stakeholders' level, with a focus on policy testing and the inclusion of behavioral economics aspects.
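The technology-adoption ABM described above models how stakeholders decide to deploy new technology. A classic building block for such models is a threshold adoption rule; the following toy sketch (all parameters illustrative, not taken from the paper) shows an agent adopting once the fraction of adopters exceeds its private threshold, which can stand in for behavioral-economics factors:

```python
import random

def adoption_sim(n=50, threshold_mean=0.3, seeds=3, steps=20, rng=None):
    """Toy threshold model of technology adoption among stakeholders.
    Each agent adopts once the overall fraction of adopters exceeds its
    private threshold. Illustrative only; not the paper's ABM."""
    rng = rng or random.Random(42)
    thresholds = [rng.uniform(0, 2 * threshold_mean) for _ in range(n)]
    adopted = [i < seeds for i in range(n)]  # a few early adopters
    history = [sum(adopted)]
    for _ in range(steps):
        frac = sum(adopted) / n
        for i in range(n):
            if not adopted[i] and frac >= thresholds[i]:
                adopted[i] = True
        history.append(sum(adopted))
    return history

print(adoption_sim())  # adoption count over time; may stall or cascade
```

Depending on the threshold distribution, adoption either cascades to the whole population or stalls, which is the kind of dynamics policy measures aim to influence.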
Capacity Analysis for a Flow Corridor with Dynamic Wake Separation Azin Zare Noghabi (Facebook, George Mason University) and John Shortle (George Mason University) Abstract Abstract This paper presents a simulation framework to investigate the capacity benefits of employing a dynamic wake separation policy in a single-lane flow corridor. The flow corridor concept is a proposed route structure in en-route airspace to increase capacity in response to growing demand. Aircraft flying in a flow corridor must be safely separated to avoid collisions as well as wake vortex encounters. Wake vortices are circular patterns of rotating air left behind a wing as it generates lift and can pose a hazard to other aircraft. This research considers a dynamic wake separation concept that uses information about the weight and airspeed of aircraft and meteorological conditions to determine the minimum required wake separation between aircraft in a flow corridor. This is in contrast to a static policy that uses a fixed separation minimum based on conservative assumptions. The simulation results demonstrate capacity benefits compared to static separation standards. Vendor · Vendor [Virtual] AutoMod Virtual User Group Technical Session · Healthcare Applications [Virtual] Simulating Disease Progression Chair: Varun Madhavan (IIT Kharagpur) A Simulation Model of Breast Cancer Incidence, Progression, Diagnosis and Survival in India Saumya Gupta, Chandan Mittal, Soham Das, and Shaurya Shriyam (Indian Institute of Technology Delhi); Atul Batra (All India Institute of Medical Sciences New Delhi); and Varun Ramamohan (Indian Institute of Technology Delhi) Abstract Abstract For resource-constrained health systems, it is important to evaluate the cost-effectiveness of a breast cancer (BC) screening program prior to its real-world implementation.
In this paper, we provide an overview of a new simulation model of BC incidence, progression, and diagnosis developed to maximize the cost-effectiveness of a BC screening program. We describe the development of the modules for BC incidence, tumor growth and diagnosis, and survival estimation of diagnosed patients. Incidence of BC is based on published risk factors and publicly available national cancer registry reports. Survival curves for diagnosed patients based on specific characteristics such as age group and stage at diagnosis are generated using a single published survival curve for the entire population and hazard ratios for each characteristic. Our simulation development approach can provide a model development template for researchers working to develop BC simulation models in other settings with data availability similar to our case. Comparing Data Collection Strategies Via Input Uncertainty When Simulating Testing Policies Using Viral Load Profiles Drupad Parmar and Lucy E. Morgan (Lancaster University), Eva D. Regnier and Susan M. Sanchez (Naval Postgraduate School), and Andrew C. Titman (Lancaster University) Abstract Abstract Temporal profiles of viral load have individual variability and are used to determine whether individuals are infected based on some limit of detection. Modelling and simulating viral load profiles allows the performance of testing policies to be estimated; however, viral load behaviour can be very uncertain. We describe an approach for studying the input uncertainty passed to simulated policy performance when viral load profiles are estimated from different data collection strategies. Our example shows that comparing the strategies solely on the basis of input uncertainty is inappropriate due to the differences in confidence interval coverage caused by negatively biased simulation outputs.
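The input-uncertainty abstract above concerns how estimation error in input models propagates to simulated policy performance. A common way to expose this is bootstrapping: resample the field data, re-estimate the input parameter, and rerun the simulation with each estimate. A hedged toy sketch (the exponential viral-load stand-in, detection limit, and sample sizes are all invented for illustration, not the authors' model):

```python
import random
import statistics

def simulate_policy(detect_limit, mean_peak, rng, reps=200):
    """Toy testing-policy simulation: fraction of cases detected when
    viral-load peaks are exponential with the given mean.
    Purely illustrative; not the authors' viral-load model."""
    hits = sum(rng.expovariate(1 / mean_peak) > detect_limit
               for _ in range(reps))
    return hits / reps

rng = random.Random(0)
data = [rng.expovariate(1 / 5.0) for _ in range(40)]  # "field" samples

# Bootstrap the input parameter, then push each estimate through the
# simulation: the spread of outputs reflects input uncertainty.
outputs = []
for _ in range(100):
    resample = [rng.choice(data) for _ in data]
    est_mean = statistics.mean(resample)   # re-estimated input parameter
    outputs.append(simulate_policy(3.0, est_mean, rng))

print(statistics.mean(outputs), statistics.stdev(outputs))
```

A larger or better-targeted data collection strategy shrinks the spread of `outputs`; the paper's point is that this spread alone is not a sufficient basis for comparing strategies.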
A Discrete Simulation Optimization Approach Towards Calibration of an Agent-Based Simulation Model of Hepatitis C Virus Transmission Soham Das (Indian Institute of Technology Delhi), Navonil Mustafee (University of Exeter), and Varun Ramamohan (Indian Institute of Technology Delhi) Abstract Abstract This study demonstrates the implementation of the stochastic ruler discrete simulation optimization method for calibrating an agent-based model (ABM) developed to simulate hepatitis C virus (HCV) transmission. The ABM simulates HCV transmission between agents interacting in multiple environments relevant for HCV transmission in the Indian context. Key outcomes of the ABM are HCV and injecting drug user (IDU) prevalences among the simulated cohort. Certain input parameters of the ABM need to be calibrated so that simulation outcomes attain values as close as possible to real-world HCV and IDU prevalences. We conceptualize the calibration process as a discrete simulation optimization problem by discretizing the calibration parameter ranges, defining an appropriate objective function, and then applying the stochastic ruler random search method to solve this problem. We also present a method that exploits the monotonic relationship between the simulation outcomes and calibration parameters to yield improved calibration solutions with less computational effort.
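The stochastic ruler method used in the calibration study above accepts a candidate only if noisy objective samples beat independent uniform "ruler" draws spanning the objective's range. A minimal sketch on a toy noisy objective (the objective, domain, and ruler bounds are invented for illustration, not the paper's HCV model):

```python
import random

def stochastic_ruler(f, domain, a, b, iters=300, tests=3, rng=None):
    """Minimal stochastic ruler search over a finite domain (minimization).
    f(x, rng) returns a noisy objective sample; (a, b) bound its range.
    A candidate is accepted only if every noisy sample is below an
    independent uniform 'ruler' draw from (a, b). Illustrative sketch."""
    rng = rng or random.Random(7)
    x = rng.choice(domain)
    for _ in range(iters):
        cand = rng.choice(domain)  # neighborhood = whole domain, for brevity
        if all(f(cand, rng) <= rng.uniform(a, b) for _ in range(tests)):
            x = cand
        # (in the full method, `tests` grows over iterations for convergence)
    return x

# Toy calibration: find the parameter whose noisy output is smallest.
def noisy_obj(x, rng):
    return (x - 4) ** 2 + rng.gauss(0, 1)   # true optimum at x = 4

best = stochastic_ruler(noisy_obj, list(range(11)), a=-2, b=40)
print(best)
```

Good candidates beat the ruler far more often than bad ones, so the search tends to settle near the optimum without ever averaging replications.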
Technical Session · Scientific Applications [Virtual] Scientific Applications Chair: Rafael Mayo-García (CIEMAT); Esteban Mocskos (University of Buenos Aires (AR), CSC-CONICET) A Novel Cloud-based Framework for Standardized Simulations in the Latin American Giant Observatory (LAGO) Antonio Juan Rubio-Montero, Raúl Pagán-Muñoz, Rafael Mayo-García, and Alfonso Pardo-Diaz (CIEMAT); Iván Sildelnik (CONICET); and Hernán Asorey (CNEA, Instituto de Tecnologías en Detección y Astropartículas) Abstract Abstract LAGO, the Latin American Giant Observatory, is an extended cosmic ray observatory, consisting of a wide network of water Cherenkov detectors located in 10 countries. With different altitudes and geomagnetic rigidity cutoffs, their geographic distribution, combined with the new electronics for control, atmospheric sensing and data acquisition, allows the realization of diverse astrophysics studies at a regional scale. It is an observatory designed, built and operated by the LAGO Collaboration, a non-centralized alliance of 30 institutions from 11 countries. While LAGO has access to different computational frameworks, it lacks standardized computational mechanisms to fully realize its cooperative approach. The European Commission is fostering initiatives aligned with LAGO objectives, especially to enable Open Science and its long-term sustainability. This work introduces the adaptation of LAGO to this paradigm within the EOSC-Synergy project, focusing on the simulations of the expected astrophysical signatures at detectors deployed at the LAGO sites around the world.
Non-Equilibrium Green Functions Approach to Study Transport Through a-Si:H/c-Si Interfaces Alessandro Pecchia (Consiglio Nazionale delle Ricerche (CNR)); Francesco Buonocore, Massimo Celino, and Simone Giusepponi (ENEA); Edoardo Di Napoli and Sebastian Achilles (Jülich Supercomputing Centre); and Pablo Luis Garcia-Muller and Rafael Mayo-Garcia (CIEMAT) Abstract Abstract The microscopic mechanisms of transport and recombination in silicon heterojunction solar cells are still poorly understood. The purpose of the present work is to understand the transport mechanisms underlying photovoltaic devices based on silicon heterojunction technology by simulating amorphous-crystalline heterointerfaces at atomistic resolution. We have used classical molecular dynamics simulations to build up realistic c-Si/a-Si:H/c-Si interfaces at different temperatures. The ab initio characterization has been executed on selected configurations to monitor the electronic properties of the c-Si/a-Si:H/c-Si interface. The electron transmission is calculated at different temperatures based on the non-equilibrium Green functions approach, and its behavior is correlated with the evolution of the intragap states. The whole outlined process will allow the design of more efficient solar cells based on silicon heterojunction technology. Comparing the Effect of Code Optimizations on Simulation Runtime Across Synchronous Cellular Automata Models of HIV Junjiang Li (Miami University), Till Köster (University of Rostock), and Philippe J. Giabbanelli (Miami University) Abstract Abstract Models developed by domain experts occasionally struggle to achieve a sufficient execution speed. Improving performance requires expertise in parallel and distributed simulations, hardware, or time to profile performance to identify bottlenecks.
However, end-users in biological simulations of the Human Immunodeficiency Virus (HIV) have repeatedly demonstrated that these resources are either not available or not sought, resulting in models that are developed through user-friendly languages and platforms, then used on workstations. This situation becomes problematic when performance cannot cope with the salient characteristics of the phenomenon that is modeled, as is the case with cellular automata (CA) models of HIV. In this paper, we optimize the Python code of CA models of HIV to scale the number of cells handled by a simulation on a workstation commonly available to end-users. We demonstrate this scalability via five HIV CA models and compare these results to assess how modeling choices can impact runtime. Commercial Case Study · Commercial Case Studies [Virtual] Manufacturing Optimization Chair: David T. Sturrock (Simio LLC) Virtual Engineering to Design Advanced Manufacturing Systems Emile Glorieux and Terie Purse (MTC) Abstract Abstract Designing advanced manufacturing systems (AMS) is a complex task that must consider re-configurability requirements, how these should be controlled to protect the system's productivity, and how to guarantee consistent product quality. Virtual Engineering (VE) multi-disciplinary simulation provides insights to explore design options and evaluate different scenarios whilst considering multiple criteria. This paper presents how VE can be used in the design of AMS to meet objectives for productivity, re-configurability and process quality. Commercial case studies relevant to various industries are presented, including personal care formulation, pharmaceuticals, material handling and packing. The main contribution is the demonstration of how system design and VE work together to effectively support the design challenges. The steps and key considerations are presented.
Application of Simulation Optimization to Minimize Downtime Lewis Bobbermen and Tao Vink (Polymathian) Abstract Abstract In this case study, simulation optimization is used to minimize the inventory required to meet a target level of unplanned outage time (or conversely, to minimize the cost of downtime for a given inventory level). A large inventory of conveyor belts is maintained as insurance stock to minimize the cost of downtime associated with conveyor belt failures. Hundreds of conveyors are used across multiple sites, each with a different probability and cost of failure. A MIP formulation is used to optimize inventory holdings for each belt material type. Conveyor belt failures are then simulated over a long period of operations to provide an assessment of the likely range of performance for the optimized conveyor belt inventory and a comparison with the existing inventory. The study provides a relatively simple application of simulation optimization that has the potential to deliver significant value to operations with large spare parts inventory holdings. Process Wind Tunnel for Improving Business Process Sudhendu Rai (American International Group (AIG)) Abstract Abstract In this talk, we will introduce a simulation-based process improvement framework and methodology called the Process Wind Tunnel. We will describe this framework and introduce the underlying technologies, namely process mapping and data collection, data wrangling, exploratory data analysis and visualization, process mining, discrete-event simulation optimization and solution implementation. We will discuss how the Process Wind Tunnel framework was utilized to improve a critical business process, namely the post-execution trade settlement process. The work builds upon and generalizes the Lean Document Production solution (2008 Edelman finalist) for optimizing printshops to more general and complex business processes found within the insurance and financial services industry.
Commercial Case Study · Commercial Case Studies [Virtual] Haulage Operations Chair: Devdatta Deo (Simio LLC) Haulage Simulation with Complex Allocation and Tactical Stockpiling Kevin Nguyen and Solene Hegarty-Cremer (Polymathian) Abstract Abstract Haulage is typically the largest single cost in open pit mining operations. Small improvements in haulage operations for large open pit mines can translate to significant returns. Delays due to queuing at shovel locations, crushers and dump locations can add up to a significant proportion of the average truck cycle, presenting an opportunity for savings if the causes of queuing can be identified and addressed. Simulation of haulage operations is an essential tool both for understanding the causes of operational delays and for testing potential solutions. Simulation of haulage operations requires representation of fundamental truck cycle activities, ancillary activities and movement interactions on complex networks. It also requires simulation of complex decision making if there are many alternative source locations and destinations for different ore types and waste. This presents a challenge for developing a good simulation model representation, but also provides an environment where simulation analysis can deliver significant value. Simulation of Carrapateena Underground Haulage Operations Colin Eustace and Lewis Bobbermen (Polymathian) and Daniel Hronsky (OZ Minerals) Abstract Abstract Truck haulage for underground mining typically operates on single-width roadways, often with limited opportunities for returning empty and loaded trucks to pass. Vehicle interactions and associated waiting delays at passing bays and turnouts can accumulate to a substantial proportion of overall average truck cycle times. This limits the total capacity of the haulage system. The incidence of vehicle interactions can also change significantly with haulage throughput, truck fleet size and changes in haulage routes over time.
This case study describes the application of simulation of the haulage operations at the Carrapateena underground mine in South Australia to evaluate the future capability of the haulage system. The representation of haulage operations includes detailed truck maneuvers to manage campaigning and passing of multiple trucks, and management of ore and development waste stockpiles to both feed the downstream materials handling system and allow the upstream mine production and development to continue operating. Assessing the Hinterland of Maritime Ports in the European Northern Range Ralf Elbert and Michael Gleser (Technische Universität Darmstadt) and Frank Van der Laan (Port of Rotterdam) Abstract Abstract The hinterland region in the state of North Rhine-Westphalia (NRW) in western Germany is in close proximity to several ports (e.g., Rotterdam, Antwerp, Hamburg, Wilhelmshaven), allowing for inter-port competition through hinterland connectivity. To assess the impact of additional hinterland connections, a simulation model has been built, mapping the hinterland connectivity to the respective ports. While calibration of the model can and should be further improved, the impact of changes in the hinterland connectivity can be experimentally evaluated by adding or modifying connections. The model can serve as a first estimate of new potential connections on a managerial level, giving hints for further qualitative investigations of their impact. Technical Session · Simulation for Smart Cities [Virtual] Simulation for Smart Cities Chair: Mina Sartipi (University of Tennessee at Chattanooga); Edward Y. Hua (MITRE Corporation); Sanja Lazarova-Molnar (University of Southern Denmark, SDU) Do People Favor Personal Data Markets in a Surveillance Society?
Ranjan Pal (University of Michigan), Yixuan Wang (Carnegie Mellon University), Charles Light and Yifan Dong (University of Michigan), Pradipta Ghosh (Facebook), Harshith Nagubandi (University of California San Diego), Bodhibrata Nag (Indian Institute of Management Calcutta), Mingyan Liu (University of Michigan), Leana Golubchik (University of Southern California), Swades De (Indian Institute of Technology Delhi), and Tathagata Bandyopadhyay (Indian Institute of Management Ahmedabad) Abstract Abstract We investigate and rationalize how individuals in a surveilled developing economic society value a human-centric data economy (HCDE). We first design and conduct a non-online pilot field experiment on approximately 22,500 human subjects across India from 2014-2019, and collect data reflecting the impact of monetary incentives on these subjects to voluntarily trade their (personal) data in the digital surveillance age. Consequently, we study how various degrees of incentive influence subject preferences, both when they are and when they are not well informed about the commercial malpractices their personal data might be subjected to. We analyze and rationalize two main observations in general for the Indian population: (i) despite being warned of the commercial malpractices associated with their personal data in the mobile and IoT age, they prefer to trade data for incentives, and (ii) the willingness of individuals to trade personal data is statistically heavy-tailed, and hints at following a weak power law. Performance Of D2D/NB-IoT Communications In Urban And Suburban Environments Rodolfo Leonardo Sumoza Matos (Universidad de Buenos Aires - FCEyN - DC; CONICET - CSC) and Emmanuel Lujan and Esteban Eduardo Mocskos (Universidad de Buenos Aires) Abstract Abstract Smart cities are witnessing exceptional growth in their connections, increasing the need for LPWA communications, i.e.
low bitrate, coverage enhancement, ultra-low power consumption, and massive terminal access. 5G Narrowband-IoT has emerged to satisfy these requirements. Nevertheless, it presents limitations in extreme coverage scenarios, where devices can lose connectivity unnecessarily. The addition of Device-to-Device (D2D) communication, connecting out-of-coverage devices to a base station through a relay, is a solid approach to mitigating these issues. More comprehensive performance and sensitivity analyses are required to achieve this goal. This study targets two typical scenarios, urban and suburban, measuring the impact of the duty cycle, path loss, retransmissions and interference on the expected delivery ratio, end-to-end delay, and QoS. Our simulations show how the behavior of these quantities leads to a novel strategy to avoid disconnection. Spatial Models and Masks in Indoor Analysis for the Spread of COVID-19 Zein Hajj-Ali and Gabriel A. Wainer (Carleton University) Abstract Abstract Face masks have been shown to slow or stop the spread of airborne COVID-19 droplets and aerosols. There is an apparent lack of research examining the effect of different types of masks used at the same time and their impact on the spread of viral particles in a spatial sense. We introduce a rapid prototype model, using the Cell-DEVS formalism, to overcome these gaps in the available research. We also build scenarios for the model to examine the effectiveness of all types of masks and respirators recommended by the World Health Organization on the spread of viral particles in an indoor environment.
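The mask-spread abstract above relies on a cell-space model of indoor transmission. As a hedged illustration of the general idea (a plain synchronous cellular automaton with invented transmission probabilities, not the authors' Cell-DEVS model of droplets and aerosols), masks on either side of a contact can be modeled as multiplicative reductions of the infection probability:

```python
import random

def step(grid, p_base, mask_factor, rng):
    """One synchronous update of a toy infection CA.
    Cell state: ('S' or 'I', masked flag). Masks on either side scale the
    per-neighbor transmission probability. Illustrative only."""
    n = len(grid)
    new = [row[:] for row in grid]
    for r in range(n):
        for c in range(n):
            state, masked = grid[r][c]
            if state != 'S':
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < n and 0 <= cc < n and grid[rr][cc][0] == 'I':
                    p = p_base * (mask_factor if masked else 1.0)
                    if grid[rr][cc][1]:      # infected neighbor also masked
                        p *= mask_factor
                    if rng.random() < p:
                        new[r][c] = ('I', masked)
                        break
    return new

rng = random.Random(3)
n = 15
grid = [[('S', rng.random() < 0.5) for _ in range(n)] for _ in range(n)]
grid[n // 2][n // 2] = ('I', False)          # index case, unmasked
for _ in range(25):
    grid = step(grid, p_base=0.4, mask_factor=0.3, rng=rng)
infected = sum(cell[0] == 'I' for row in grid for cell in row)
print(infected)
```

Varying `mask_factor` per cell is the simplest way to mimic mixing different mask types in one room, which is the scenario the paper examines with far higher physical fidelity.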
Technical Session, Introductory Tutorial · Introductory Tutorials [Virtual] Agent-Based Modeling and Simulation for Management Decisions: A Review and Tutorial Chair: Giovanni Lugaresi (Politecnico di Milano) Agent-Based Modeling and Simulation for Management Decisions: A Review and Tutorial Bhakti Stephan Onggo (University of Southampton) and Joël Foramitti (Universitat Autònoma de Barcelona) Abstract Abstract Agent-based modeling and simulation (ABMS) has become one of the most popular simulation methods. It has been applied to a wide range of application areas including business and management. This article introduces ABMS and explains how it can support management decision making. It covers key concepts and the modeling process. AgentPy is used to show the software implementation of the concepts. This article also provides a literature review on ABMS in business and management research using bibliometric analysis and content analysis. It shows that there has been an increase in the research that uses ABMS and identifies several research clusters across management disciplines such as strategic management, marketing management, operations and supply chain management, financial management, and risk management. Technical Session · Data Science for Simulation [Virtual] DSS 1 Chair: Hamdi Kavak (George Mason University); Abdolreza Abhari (Ryerson University) Detecting Communities and Attributing Purpose to Human Mobility Data Best Contributed Applied Paper - Finalist Esther John, Katherine Cauthen, and Nathanael Brown (Sandia National Laboratories) and Linda Nozick (Cornell University) Abstract Abstract Many individuals’ mobility can be characterized by strong patterns of regular movements and is influenced by social relationships. Social networks are also often organized into overlapping communities which are associated in time or space. 
We develop a model that can generate the structure of a social network and attribute purpose to individuals’ movements, based solely on records of individuals’ locations over time. This model distinguishes the attributed purpose of check-ins based on temporal and spatial patterns in check-in data. Because no location-based social network dataset with authoritative ground truth exists to test our entire model, we generate large-scale datasets containing social networks and individual check-in data to test our model. We find that our model reliably assigns community purpose to social check-in data, and is robust over a variety of different situations. Input Data Modeling: An Approach Using Generative Adversarial Networks Best Contributed Theoretical Paper - Finalist José Arnaldo Barra Montevechi, Afonso Teberga Campos, Gustavo Teodoro Gabriel, and Carlos Henrique Santos (Universidade Federal de Itajubá) Abstract Abstract Input data modeling varies according to the modeler's objectives and may be a simple or complex task. Despite great advances in data collection techniques, input data analysis remains a challenge, especially when the input data is complex and cannot be modeled by standard solutions offered by commercial simulation software. Therefore, this paper focuses on how Generative Adversarial Networks (GANs) may support input data modeling, especially when traditional approaches are insufficient or inefficient. We evaluate the adoption of GANs for modeling correlated data as well as independent and identically distributed data. As a result, GAN input models were able to generate highly accurate synthetic samples (average accuracies > 97.0%). For univariate distributions, we found no significant difference between the performances of standard and GAN input models. On the other hand, for correlated data, GAN input models outperformed standard ones. The most relevant accuracy gain was observed for the bivariate normal.
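The GAN input-modeling abstract above reports the largest gains on correlated data. The underlying problem is easy to demonstrate without a GAN: fitting each marginal distribution independently (the "standard" approach in most simulation software) reproduces the marginals but destroys the correlation a learned joint model would preserve. A small illustration with invented bivariate data:

```python
import random
import statistics

rng = random.Random(0)
# Correlated "real" input data: y depends linearly on x.
data = []
for _ in range(2000):
    x = rng.gauss(0, 1)
    y = 0.9 * x + rng.gauss(0, 0.435)   # corr(x, y) close to 0.9
    data.append((x, y))

def corr(pairs):
    """Population Pearson correlation of a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in pairs) / len(pairs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

# "Standard" input model: fit each marginal independently and sample.
xs, ys = zip(*data)
indep = [(rng.gauss(statistics.mean(xs), statistics.stdev(xs)),
          rng.gauss(statistics.mean(ys), statistics.stdev(ys)))
         for _ in range(2000)]

print(round(corr(data), 2), round(corr(indep), 2))  # correlation is lost
```

A generative model trained on the joint samples, such as the GANs evaluated in the paper, keeps this dependence structure in its synthetic output.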
Electoral David-vs-Goliath: Probabilistic Models of Spatial Distribution of Electors to Simulate District-based Election Outcomes Adway Mitra (IIT Kharagpur) Abstract Abstract In district-based elections, electors cast votes in their respective districts. In each district, the party with the most votes wins the corresponding “seat” in the governing body. The election result is based on the number of seats won by different parties. In this system, the locations of electors across the districts may severely affect the election result even if the total number of votes obtained by different parties remains unchanged. A less popular party may end up winning more seats if its supporters are suitably distributed spatially. In this paper, we frame the spatial distribution of electors in a multi-party system in a probabilistic setting, and consider different models to simulate election results while capturing various properties of realistic elections. We use the Approximate Bayesian Computation (ABC) framework to estimate model parameters. We show that our model can reproduce the results of elections held in India and the USA, and can also produce counterfactual scenarios. Panel · Simulation and Philosophy [Virtual] Panel on Ethical Considerations for Validation Chair: Andreas Tolk (The MITRE Corporation) Panel on Ethical Constraints on Validation, Verification, and Application of Simulation Andreas Tolk (The MITRE Corporation), Justin E. Lane (ALAN Analytics s.r.o), F. LeRon Shults (University of Agder), and Wesley J. Wildman (Boston University) Abstract Abstract Today’s challenges must be addressed as socio-technical systems, including insights from the social sciences and humanities to adequately represent the human components. As simulation results increasingly drive and justify political and social decisions, it is important to validate and verify (V&V) simulations and data.
However, the understanding of what establishes truth, and of how these views impact validation, differs between the social and technical partners. Therefore, we must expand our view of V&V. The panel provides various use cases and derives ethical questions related to supporting universities during the COVID-19 pandemic, creating multi-disciplinary teams with diverse viewpoints, challenges of using validated insights without critical evaluation, and the lack of broadly accepted scientific measures to connect social models and empirical data. We conclude that the role of V&V must be reemphasized, that its social-theoretical implications must be better understood, and that it should be driven by an overarching metaethical framework. Technical Session · Environment and Sustainability Applications [Virtual] Environmental and Sustainability Applications 2 Chair: Adrian Ramirez Nafarrate (ITAM) Dynamic Modeling and Sensitivity Analysis of a Stratified Heat Storage Coupled with a Heat Pump and an Organic Rankine Cycle Daniel Scharrer, Marco Pruckner, Peter Bazan, and Reinhard German (FAU Erlangen-Nuremberg) Abstract The storage of electrical energy is becoming increasingly important to satisfy demand through renewable energy sources. In this paper, two simulations of a pumped thermal energy storage system are compared with respect to their computational time and accuracy. In order to keep the complexity of the simulations as low as possible, the stratified heat storage is modelled using one-dimensional considerations and abstractions. The System Dynamics model provides a slightly better abstraction of the storage, but requires 60 times the computation time of a discrete simulation with a fixed timestep. The sensitivity analysis shows that the thermodynamic parameters of the storage fluid have very little influence on the overall result.
The determining factor regarding losses in the storage and possible savings for users is the type and thickness of the insulation layer of the thermal storage. DiSH-trend: Intervention Modeling Simulator That Accounts for Trend Influences Stefan Andjelkovic and Natasa Miskov-Zivanov (University of Pittsburgh) Abstract Simulation on directed graphs is an important method for understanding the dynamics of systems whose connectivity graphs contain cycles. The Discrete Stochastic Heterogeneous Simulator (DiSH) is a widely applied simulation tool which uses regulator values to calculate state updates of regulated elements. Here we present a new simulation approach, DiSH-trend, which also takes into account the trends in regulating elements. We demonstrate the features of trend-based regulation, as well as hybrid regulation, which is a combination of the trend- and level-based approaches. The modeling capabilities are demonstrated on a small toy model, showcasing different functionalities. Real-world capabilities are demonstrated on a larger network model of food insecurity in the Ethiopian region Oromia. Adding trend-based regulation to models results in increased modeling flexibility, and hybrid regulation improves qualitative dynamic behavior prediction. With appropriate data, DiSH-trend becomes a powerful tool for exploring intervention strategies. Analyzing the Charging Capacity of Electric Vehicles for Interurban Travel using Simulation Adrian Ramirez-Nafarrate, Juan Carlos Grayeb Pereira, Francisco J. Ruiz Barajas, and Hugo Briseño (Universidad Panamericana) and Ozgur M. Araz (University of Nebraska-Lincoln) Abstract The adoption of electric vehicles (EVs) has been increasing around the world in recent years. EVs present many advantages for sustainability. However, they have some drawbacks, including high upfront costs, the limited range of some models, and the limited availability of chargers.
Hence, long-distance travel can be challenging for EVs. In this paper, we present a simulation-based study to analyze the charging capacity between two main cities in Mexico. Although EV sales in Mexico are increasing, the number of such vehicles on the roads is still relatively low. Consequently, the charging infrastructure might not be sufficient to complete long trips, given the large size of the country. The modeling approach proposed in this paper helps identify areas where new charging stations are needed to complete long trips. Furthermore, the results reveal a high correlation in the congestion of neighboring charging stations. Technical Session · Logistics, Supply Chains, and Transportation [Virtual] Retail Logistics Chair: Lieke de Groot (Belsimpel) Environmental Sustainability as Food for Thought! Simulation-based Assessment of Fulfillment Strategies in the e-Grocery Sector Marvin Auf der Landwehr, Maik Trott, and Christoph von Viebahn (Hochschule Hannover) Abstract Environmental sustainability is among the key concerns of our time. The traffic and transportation sector in particular has strongly negative consequences for sustainability metrics such as CO2 emissions, increasing the need for innovative concepts that cope with the requirements of our modern society while decreasing environmental pollution. A potential measure in this regard is grocery delivery (e-grocery), which can achieve economies of scale by bundling orders. Within the last two decades, multiple e-grocery concepts have evolved in operational practice, which we assess by means of a comprehensive simulation study to guide future systematic investigation on (simulation-based) sustainability research.
The concrete results of our study indicate that grocery deliveries by courier, express, and parcel organizations can outperform fulfillment strategies based on insourcing by up to 50% in terms of mileage, and by 39% and 66% in terms of CO2 and PM2.5 emissions, respectively. Simulation and Optimization Framework for On-demand Grocery Delivery Siddhartha Paul and Goda Doreswamy (Swiggy, Bundl Technologies) Abstract This paper presents a generic two-stage optimization model and a simulation framework for on-demand grocery delivery. The proposed simulation framework can be used to reproduce any historical day's behaviour or to evaluate various what-if scenarios. The two-stage optimization model's objective is to minimize the Cost Per Delivery (CPD) and maximize Customer Experience (CX). In the first stage, Last-Mile (LM) delivery optimization is modelled as a Pickup and Delivery Problem with Time Windows (PDPTW). Both static and dynamic variants of the PDPTW model are implemented and their performances compared. A Just-In-Time (JIT) heuristic is integrated with the PDPTW to minimize wait time. The second stage solves the First-Mile (FM) delivery optimization using a multi-objective assignment model that trades off the CPD and CX. The proposed framework is simulated with actual order data; we find that the dynamic PDPTW model yields significant savings in CPD while maintaining a good CX. Developing a Calibrated Discrete Event Simulation Model of Shops of a Dutch Phone and Subscription Retailer During COVID-19 to Evaluate Shift Plans to Reduce Waiting Times Lieke de Groot (Belsimpel) and Alexander Hübl (University of Groningen) Abstract Under COVID-19 circumstances, Belsimpel experiences much larger waiting times during peak hours than in non-COVID-19 times. Therefore, a discrete event simulation model of the Belsimpel shops has been developed. However, the model could not be validated directly due to data scarcity.
This paper proposes a model calibration procedure based on the idea that service times decrease during high-demand hours and increase otherwise. The results show that the proposed procedure enables the generation of realistic Key Performance Indicator values. The calibrated simulation model can be used for analyzing the performance of possible improvements. Accordingly, the calibrated model is applied to investigate the impact of an improved employee schedule. The results show that the mean waiting time decreases by 20–33%, the maximum waiting time decreases by 12–20%, and the mean service level increases by 3–11%. These improvements enhance customer satisfaction while scheduling the same number of working hours. Technical Session · Logistics, Supply Chains, and Transportation [Virtual] Local Transport Chair: Bhakti Stephan Onggo (University of Southampton) Solving an Urban Ridesharing Problem with Stochastic Travel Times: A Simheuristic Approach Leandro do C. Martins (Universitat Oberta de Catalunya), Maria Torres and Elena Perez-Bernabeu (Universitat Politècnica de València), Canan G. Corlu (Boston University), Angel A. Juan (Universitat Oberta de Catalunya), and Javier Faulin (Public University of Navarre) Abstract Ridesharing and carsharing concepts are redefining mobility practices in cities across the world. These concepts, however, also raise noticeable operational challenges that need to be efficiently addressed. In the urban ridesharing problem (URSP), a fleet of small private vehicles owned by citizens should be coordinated in order to pick up passengers on their way to work, hence maximizing the total value of their trips while not exceeding a deadline for reaching the destination points. Since this is a complex optimization problem, most of the existing literature assumes deterministic travel times. This assumption is unrealistic and, for this reason, we discuss a richer URSP variant in which travel times are modeled as random variables.
Using random travel times also forces us to consider a probabilistic constraint regarding the duration of each trip. For solving this stochastic optimization problem, a simheuristic approach is proposed and tested via a series of computational experiments. A Simulation-Based Approach to Compare Policies and Stakeholders' Behaviors for the Ride-Hailing Assignment Problem Ignacio Ismael Erazo Neira (Georgia Institute of Technology) and Rodrigo De la Fuente (Universidad de Concepcion) Abstract This study focuses on the ride-hailing assignment problem, aiming to optimize drivers’ behaviors with respect to simultaneous objectives such as maximizing service level, minimizing CO2 emissions, and minimizing riders’ waiting times. Four different policies were proposed and tested with a real-world case study. With respect to the current literature, we present a more realistic simulation model, capturing all characteristics of a ride-hailing system and using road networks to approximate real-time road conditions. Furthermore, this is the first work that tests the effects of different passengers’ arrival conditions and analyzes the multiple objectives for different zones of a large city. Results suggest that different passengers’ arrival conditions affect the four proposed policies nearly identically. Finally, the policy of drivers remaining static instead of driving while searching for passengers had the highest service level and lowest average distance per ride. Waste Collection of Medical Items under Uncertainty Using Internet of Things and City Open Data Repositories: A Simheuristic Approach Mohammad Peyman, Yuda Li, Rafael D. Tordecilla, Pedro J. Copado-Méndez, and Angel A. Juan (Open University of Catalonia) and Fatos Xhafa (Polytechnic University of Catalonia) Abstract In a pandemic situation, large quantities of medical items are consumed by citizens globally. If not properly processed, these items can be polluting or even dangerous.
Inspired by a real case study in the city of Barcelona, and assuming that data from container sensors are available in the city's open repository, this work addresses a medical waste collection problem both with and without uncertainty. The waste collection process is modeled as a rich open vehicle routing problem, where the constraints lie not in the loading dimension but in the maximum time each vehicle can circulate without having to perform a mandatory stop, with the goal of minimizing the time required to complete the collection process. To provide high-quality solutions to this complex problem, a biased-randomized heuristic is proposed. This heuristic is combined with simulation to provide effective collection plans in scenarios where travel and pickup times are uncertain. Plenary · MASM: Semiconductor Manufacturing MASM Keynote: Chip Technology Innovations and Challenges for Process Tool Scheduling and Control Chair: Hyun-Jung Kim (KAIST) Chip Technology Innovations and Challenges for Process Tool Scheduling and Control Tae-Eog Lee (KAIST) Abstract The semiconductor manufacturing industry has made extreme technology innovations in circuit width shrinkage, complex 3D chip architectures and high-rise circuit layer stacking, and wafer size increases. These have significantly increased not only process complexity and precision, quality and investment risks, but also operational complexity in fabs and process tools. Fab scheduling must cope with more metrology and yield loss, more tool maintenance and tuning, queue time management and complex time constraints between process stages, more restrictions on assigning lots to tools, full automation of wafer lot transfer and direct delivery between process tools, tighter coupling between lot scheduling and material transfer, higher variability and instability in work-in-progress (WIP), higher WIP imbalance between process stages, etc. We therefore need new ideas and approaches for fab scheduling.
We introduce recent ideas and progress in tool scheduling and control, as well as future directions. Technical Session [Virtual] Insights Chair: Joe Viana (BI Norwegian Business School, Department of Accounting and Operations Management) Getting Insight into Noise, Vibration, and Harshness Simulation Data Kresimir Matkovic (VRVis Austria); Denis Gracanin (Virginia Tech); Rainer Splechtna (VRVis Austria); Goran Todorovic, Stanislav Goja, and Boris Bedic (AVL-AST d.o.o.); and Helwig Hauser (University of Bergen) Abstract Due to ever stricter noise regulations and increasing passenger comfort requirements, noise, vibration, and harshness (NVH) simulation remains very important in the automotive industry. Noise simulation data analysis is challenging due to the data size and complexity. The Campbell diagram is a standard tool for noise analysis that summarizes noise for all frequencies and all operational points. We propose an interactive approach to a detailed analysis of the noise simulation data by means of interactive visualization. This approach, as an addition to the conventional analysis, provides additional insight into the data and facilitates the understanding of the simulated phenomenon. We deploy a coordinated multiple views configuration consisting of a parallel coordinates view, a 3D view, and three multiview projections. The approach is illustrated on the basis of NVH simulation data for a four-cylinder internal combustion engine. Assessing Resilience of Medicine Supply Chain Networks to Disruptions: A Proposed Hybrid Simulation Modeling Framework Joe Viana and Kim van Oorschot (BI Norwegian Business School) and Christine Årdal (Norwegian Institute of Public Health) Abstract The objective of the proposed hybrid simulation modeling framework is to improve the understanding and operation of medicine supply chains, strengthening their resilience to ensure the availability of medicines.
The framework draws upon the hybrid simulation, supply chain resilience, and medicine supply chain literature. The utility of the proposed framework is presented through the development of a case model of a generic (off-patent) medicine in the Norwegian system, used to perform scenario-based experiments on disruption events and interventions. Two disruption scenarios are evaluated: a demand shock (e.g., hoarding) and a supply shock (e.g., a major disruption at a key supplier). The effect of these disruptions on the system without interventions is compared with proactive and reactive interventions, namely prepositioned stock and flexible ordering. Future directions for framework development have been identified. Warehouse Storage Assignment for Surface Mount Components Using an Improved Genetic Algorithm Shin Woong Sung, Seungmin Jeong, Siwoo Park, Eoksu Sim, and Chong Keun Kim (Samsung Electronics) Abstract The assembly process of a surface-mount device (SMD) usually requires hundreds of types of surface-mount components (SMCs). A set of SMCs must be picked in the warehouse to be supplied to the production line. We define a storage location assignment problem for SMCs that considers a periodic production plan to improve the efficiency of the SMC-picking operation. We propose a solution approach based on a Genetic Algorithm (GA) to solve a reduced problem that finds the optimal allocation sequence of each type of SMC. We generate the initial population based on the characteristics of the SMCs, such as the production plan and the bill of materials (BOM). A simulation model based on AutoMod is used to compare the performance of the proposed algorithm and several practical legacy methods on empirical data. The simulation results demonstrate that the proposed algorithm is feasible and efficient in terms of picking distance.
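The GA-based storage assignment idea can be sketched in miniature: assign component types to slots so that frequently picked components sit in slots close to the picking point. The permutation encoding, operators, and toy data below are illustrative assumptions, not the authors' improved GA.

```python
import random

random.seed(42)

# Assumed toy data: distance of each storage slot from the picking I/O
# point, and the pick frequency of each component type from a
# (hypothetical) production plan.
slot_dist = [1, 2, 3, 4, 5, 6, 7, 8]
pick_freq = [30, 5, 12, 50, 8, 20, 3, 15]
N = len(slot_dist)

def cost(assign):
    # assign[slot] = component type stored there; total picking effort is
    # pick frequency weighted by slot distance.
    return sum(pick_freq[assign[s]] * slot_dist[s] for s in range(N))

def crossover(p1, p2):
    # Order crossover (OX): keep a slice of p1, fill the rest in p2's order.
    a, b = sorted(random.sample(range(N), 2))
    child = [None] * N
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for i in range(N):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def mutate(ind, rate=0.2):
    # Swap mutation keeps the individual a valid permutation.
    if random.random() < rate:
        i, j = random.sample(range(N), 2)
        ind[i], ind[j] = ind[j], ind[i]

pop = [random.sample(range(N), N) for _ in range(30)]
for _ in range(100):
    pop.sort(key=cost)
    next_pop = pop[:5]                       # elitism
    while len(next_pop) < 30:
        child = crossover(random.choice(pop[:15]), random.choice(pop[:15]))
        mutate(child)
        next_pop.append(child)
    pop = next_pop

best = min(pop, key=cost)  # frequent components end up in near slots
```

In the paper, the fitness of candidate assignments is evaluated against an AutoMod simulation of the picking operation rather than a closed-form distance sum like `cost` above.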
Technical Session · Data Science for Simulation [Virtual] DSS 4 Chair: Hamdi Kavak (George Mason University); Abdolreza Abhari (Ryerson University) Combining Simulation and Machine Learning for Response Time Prediction for the Shortest Remaining Processing Time Discipline Jamol Pender, Sukriti Sudhakar, and Eva Zhang (Cornell University) Abstract Data centers have become a large part of our world and infrastructure, and many of them now want to provide information to their customers about the response times of their jobs. To this end, it is important to be able to predict what the response times of jobs might be when a data center implements a specific queueing discipline. In this paper, we investigate the feasibility of reliably predicting response times in real time for the single-server shortest remaining processing time queue, or the G/G/1/SRPT queue. Our proposed prediction methodology is to combine stochastic simulation with supervised machine learning algorithms. We consider several forms of state information, such as the job size or the number of other jobs in the system at the time of arrival. We hope that our methodology can be replicated for prediction purposes in other queueing systems. Learning the Tandem Lindley Recursion Sergio David Palomo and Jamol Pender (Cornell University) Abstract In this paper, we attempt to learn the tandem version of Lindley's recursion directly from data. We combine stochastic simulation with current machine learning methods such as Gaussian Processes, K-Nearest Neighbors, Linear Regression, Deep Neural Networks, and Gradient Boosted Trees to learn the tandem network Lindley recursion. We uncover specific parameter regimes where learning the tandem network Lindley recursion is easy or hard.
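The target of that learning task can be written down directly. With arrival times A[n] and service times S[k][n] at station k, departure times in a tandem queue satisfy D[k][n] = max(D[k][n-1], D[k-1][n]) + S[k][n], with the arrival time playing the role of the "upstream departure" at the first station. A minimal sketch (notation is ours, not the paper's):

```python
def tandem_departures(arrivals, services):
    """Tandem Lindley recursion.

    arrivals: A[n], arrival time of customer n at station 1.
    services: S[k][n], service time of customer n at station k.
    Returns D[k][n], the departure time of customer n from station k,
    via D[k][n] = max(D[k][n-1], D[k-1][n]) + S[k][n].
    """
    n_stations, n_cust = len(services), len(arrivals)
    D = [[0.0] * n_cust for _ in range(n_stations)]
    for k in range(n_stations):
        for n in range(n_cust):
            prev_dep = D[k][n - 1] if n > 0 else 0.0          # station frees up
            upstream = D[k - 1][n] if k > 0 else arrivals[n]  # customer arrives
            D[k][n] = max(prev_dep, upstream) + services[k][n]
    return D

# Three customers through two stations.
A = [0.0, 1.0, 2.0]
S = [[2.0, 2.0, 2.0],   # station 1 service times
     [1.0, 1.0, 1.0]]   # station 2 service times
D = tandem_departures(A, S)
sojourn = [D[-1][n] - A[n] for n in range(len(A))]  # → [3.0, 4.0, 5.0]
```

The paper's experiments amount to generating many such (A, S) → sojourn pairs by simulation and asking how well off-the-shelf regressors can recover this max-plus structure.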
Uniting Simulation and Machine Learning for Response Time Prediction in Processor Sharing Queues Jamol Pender and Elena Zhang (Cornell University) Abstract Processor sharing queues are used in a variety of settings in telecommunications and internet applications and are known for being fair. In this paper, we study the possibility of accurately predicting the response times in real time of the G/G/1 processor sharing queue. To this end, we combine stochastic simulation with supervised machine learning methods. We show that many of the current machine learning algorithms can perform well at predicting response times in real time. GraphTrans: A Software System for Network Conversions for Simulation, Structural Analysis, and Graph Operations Chris J. Kuhlman, Henry Carscadden, Lucas Machi, Dustin Machi, and S. S. Ravi (University of Virginia) Abstract Network representations of socio-physical systems are ubiquitous, examples being social (media) networks and infrastructure networks like power transmission and water systems. The many software tools that analyze and visualize networks, and carry out simulations on them, require different graph formats. Consequently, it is important to develop software for converting graphs that are represented in a given source format into a required representation in a destination format. For network-based computations, graph conversion is a key capability that facilitates interoperability among software tools. This paper describes such a system, called GraphTrans, to convert graphs among different formats. This system is part of a new cyberinfrastructure for network science called net.science. We present the GraphTrans system design and implementation, results from a performance evaluation, and a case study to demonstrate its utility.
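The source-to-destination mapping GraphTrans performs at scale can be illustrated with two common plain-text formats. The converter below (a generic sketch, not the actual GraphTrans API) maps a whitespace-separated edge list to a Graphviz DOT digraph and back:

```python
def edgelist_to_dot(text, name="g"):
    """Convert an edge list (one 'u v' pair per line) into a minimal
    Graphviz DOT digraph string."""
    pairs = [ln.split() for ln in text.strip().splitlines() if ln.strip()]
    body = "".join(f"  {u} -> {v};\n" for u, v in pairs)
    return f"digraph {name} {{\n{body}}}"

def dot_to_edgelist(dot):
    """Inverse direction: extract 'u -> v;' pairs back into an edge list."""
    edges = []
    for ln in dot.splitlines():
        ln = ln.strip().rstrip(";")
        if "->" in ln:
            u, v = (p.strip() for p in ln.split("->"))
            edges.append((u, v))
    return edges

el = "1 2\n2 3\n3 1"
dot = edgelist_to_dot(el)
edges = dot_to_edgelist(dot)  # round trip preserves the edge set
```

The engineering challenge the paper addresses is doing such conversions at scale and across many richer formats (attributes, directedness, multi-edges), where a lossy round trip would silently corrupt downstream simulations.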
Technical Session · Logistics, Supply Chains, and Transportation [Virtual] Complex Logistics Systems Chair: Ralf Elbert (Technische Universität Darmstadt) Network Generation for Simulation of Multimodal Logistics Systems Robert van Steenbergen, Matteo Brunetti, and Martijn Mes (University of Twente) Abstract Simulation of multimodal logistics systems might require realistic modeling of the transportation networks (road, rail, air, and waterways), e.g., when evaluating the use of Automated Guided Vehicles (AGVs) on public roads or the combined use of trucks and Unmanned Aerial Vehicles (UAVs) in humanitarian logistics with disturbed infrastructure. In this paper, we propose a simulation add-on to automatically generate infrastructure networks for multimodal logistics, including logistics locations (e.g., warehouses and terminals) and various transport modes (e.g., trucks, AGVs, and UAVs) with corresponding behavior. The proposed methodology allows for various levels of accuracy and opens up possibilities for simulating physical flows of various transport modes, congestion, stochastic behavior of the network, and variable transport demand over time in a simple, quick, and accurate way. We illustrate our approach using two case studies corresponding to the examples given above with AGVs and UAVs. A Farming-for-Mining-Framework to Gain Knowledge in Supply Chains Joachim Hunker, Alexander Wuttke, Anne Antonia Scheidler, and Markus Rabe (Technische Universität Dortmund) Abstract Gaining knowledge from a given data basis is a complex challenge. One of the frequently used methods in the context of a supply chain (SC) is knowledge discovery in databases (KDD). For a purposeful and successful knowledge discovery, valid and preprocessed input data are necessary. Besides preprocessing collected observational data, simulation can be used to generate a data basis as an input for the knowledge discovery process.
The process of using a simulation model as a data generator is called data farming. This paper investigates the link between data farming and data mining. We developed a Farming-for-Mining-Framework, in which we highlight the requirements of knowledge discovery techniques and derive how the simulation model for data generation can be configured accordingly, e.g., to meet the required data accuracy. We suggest that this is a promising approach that is worth further research attention. A Cascading Online-Simulation Framework to Optimize Installation Cycles for Offshore Wind Farms Daniel Rippel and Michael Lütjen (BIBA - Bremer Institut für Produktion und Logistik GmbH at the University of Bremen), Helena Szczerbicka (Leibniz University Hannover), and Michael Freitag (University of Bremen) Abstract Offshore wind energy constitutes a promising technology to achieve the world’s need for sustainable energy. However, offshore wind farm installations require sophisticated planning methods due to increasing resource demands and the processes’ high dependence on viable weather conditions. The current literature provides several models that either provide strategic or tactical decision support using historical data or operative support using current measurements and forecasts. Unfortunately, models of the first type cannot support the operative level. In contrast, the second type provides decision support using local, short-term optimizations that do not consider these decisions’ effect on the overall installation project. This article proposes a cascading online-simulation concept that optimizes local decisions using current data. However, it estimates the effects of each decision using nested simulation runs and aggregates of historical data. The results show that this approach achieves a good trade-off between the project’s duration and cost-inducing delays at comparably low computational costs.
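The data-farming step of such a framework can be sketched simply: run the simulation model over a designed grid of input factors and record each run's inputs and outputs as rows of a dataset that mining techniques then consume. The toy below uses a single-server queue (via Lindley's recursion) as the data-generating model; the factor names and levels are assumptions for illustration.

```python
import random

random.seed(1)

def simulate_mean_wait(arrival_rate, service_rate, n=2000):
    """Toy data generator: mean waiting time in a single-server queue,
    estimated via Lindley's recursion w <- max(0, w + S - A)."""
    w, total = 0.0, 0.0
    for _ in range(n):
        total += w
        s = random.expovariate(service_rate)
        a = random.expovariate(arrival_rate)
        w = max(0.0, w + s - a)
    return total / n

# Data farming: sweep a full-factorial design of the input factors and
# collect (inputs, response) rows -- the data basis for subsequent mining.
dataset = []
for lam in [0.5, 0.6, 0.7, 0.8]:
    for mu in [1.0, 1.2]:
        dataset.append({"arrival_rate": lam,
                        "service_rate": mu,
                        "mean_wait": simulate_mean_wait(lam, mu)})
```

A mining step would then fit, e.g., a decision tree on `dataset` to extract rules such as "waits explode once utilization exceeds a threshold"; the framework's point is that the farming design (factors, levels, replications, accuracy) should be chosen with that mining step in mind.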
Technical Session · Using Simulation to Innovate [Virtual] Simulation Analytics for Smart Digital Twin Chair: Haobin Li (National University of Singapore, Centre for Next Generation Logistics) A Unified Offline-Online Learning Paradigm via Simulation for Scenario-dependent Selection Haitao Liu, Xiao Jin, Haobin Li, Loo Hay Lee, and Ek Peng Chew (National University of Singapore) Abstract Simulation has primarily been used for offline static system design problems; simulation-based online decision making has been a weakness because the online decision epoch is tight. This work extends the scenario-dependent ranking and selection model by considering an online scenario and budget. We propose a unified offline-online learning (UOOL) paradigm via simulation to find the best alternative conditional on the online scenario. The idea is to learn the relationship between scenarios and mean performance offline, and then dynamically allocate the online simulation budget based on the learned predictive model and online scenario information. The superior performance of the UOOL paradigm is validated on four test functions by comparing it with artificial neural networks and decision trees. On the Convergence of Optimal Computing Budget Allocation Algorithms Yanwen Li and Siyang Gao (City University of Hong Kong) Abstract This paper considers a well-known ranking and selection (R&S) framework, called optimal computing budget allocation (OCBA). This framework includes a set of equations that optimally determine the number of samples allocated to each design in a finite design set. Sample allocations that satisfy these equations have been shown to be the asymptotic optimizer of the probability of correct selection (PCS) for the best design and the expected opportunity cost (EOC) if false selection occurs. In this paper, we analyze two popular OCBA algorithms and study their convergence rates, assuming known variances for the samples of each design.
This fills the gap in convergence analysis for algorithms that are developed based on the OCBA optimality equations. In addition, we propose modifications of the OCBA algorithms for cumulative regret, an objective commonly studied in machine learning, and derive their convergence rates. Last, the convergence behaviors of these algorithms are demonstrated using numerical examples. Three Carriages Driving the Development of Intelligent Digital Twins – Simulation plus Optimization and Learning Haobin Li, Xinhu Cao, Xiao Jin, Loo Hay Lee, and Ek Peng Chew (National University of Singapore) Abstract Three key technologies are driving the development of intelligent decisions in the era of Industry 4.0: machine learning, optimization, and simulation. Relying on any one of these technologies alone cannot meet the decision timeliness and accuracy requirements of current industry decision problems. To meet this challenge, this paper first discusses several possible integrations among the three technologies, in which simulation plays an important role in depicting the system models, generating data for optimization and learning, and validating optimized decisions and learned rules. A number of future research directions are pointed out based on the gap between current technology and tool development and industry needs. Finally, the paper proposes a possible collaboration mode among higher learning institutes, research institutes, equipment and platform developers, as well as end-users, for better shaping the whole intelligent decision ecosystem. Technical Session · Logistics, Supply Chains, and Transportation [Virtual] Public Disaster Management Chair: Reha Uzsoy (North Carolina State University) Supporting Hospital Logistics during the First Months of the COVID-19 Crisis: A Simheuristic for the Stochastic Team Orienteering Problem Markus Rabe (Technical University Dortmund); Rafael D.
Tordecilla (Universitat Oberta de Catalunya, Universidad de La Sabana); Leandro do C. Martins (Universitat Oberta de Catalunya); Jorge Chicaiza-Vaca (Technical University Dortmund); and Angel A. Juan (Universitat Oberta de Catalunya) Abstract The unexpected crisis posed by the COVID-19 pandemic in 2020 led to high demand for items such as face shields and ear savers. In the Barcelona area, hundreds of volunteers employed their home 3D-printers to produce these elements. After the lockdown, the items had to be collected by a small group of volunteer drivers, who transported them to several consolidation centers. These activities required agile daily design of efficient routes, especially considering that routes should not exceed a maximum time threshold to minimize drivers’ exposure. These constraints limit the number of houses that can be visited. Moreover, travel and service times are considered as random variables. This logistics challenge is modeled as a stochastic team orienteering problem. Our main performance indicator is the collected reward, which should be maximized. The problem is solved by employing a biased-randomized simheuristic algorithm, which is capable of generating high-quality solutions in short computing times. Relief Food Supply Network Simulation Bhakti Stephan Onggo and Christine Currie (University of Southampton) and Tomy Perdana, Gheo Fauzi, Audi Achmad, and Cipta Endyana (Universitas Padjadjaran) Abstract Research into simulation modelling to support disaster management has focused on large disasters. In some regions, there are frequent small- to medium-scale disasters, which require daily decisions to be made. These are typically described as routine emergencies. For example, in Indonesia’s West Java province, on average, there were 4.6 disasters per day between 2016 and 2020.
This paper presents a simulation model of relief food distribution to refugees in a region that is vulnerable to multiple disasters on a daily basis. To illustrate how the model can support disaster management decision making, we use the West Java case. The model demonstrates that the current warehouse locations and routing heuristic can cope with the demand; however, improvements are needed to cope with an expected increase in demand due to a rising number of disasters caused by climate change and a growing population. A Data Processing Pipeline for Cyber-Physical Risk Assessments of Municipal Supply Chains Gabriel Arthur Weaver (University of Illinois at Urbana-Champaign) Abstract Smart city technologies promise reduced congestion by optimizing transportation movements. Increased connectivity, however, may increase the attack surface of a city's critical functions. Increasing supply chain attacks (up by nearly 80% in 2019) and municipal ransomware attacks (up by 60% in 2019) motivate the need for holistic approaches to risk assessment. Therefore, we present a methodology to quantify the degree to which supply chain movements may be observed or disrupted via compromised smart-city devices. Our data-processing pipeline uses publicly available datasets to model intermodal commodity flows within and surrounding a municipality. Using a hierarchy tree to adaptively sample spatial networks within geographic regions of interest, we bridge the gap between grid- and network-based risk assessment frameworks. Results based on fieldwork for the Jack Voltaic exercises sponsored by the Army Cyber Institute demonstrate our approach on intermodal movements through Charleston, SC and San Diego, CA.
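One plausible reading of the hierarchy-tree idea for adaptively sampling spatial networks is a quadtree: recursively subdivide the region of interest and refine only cells holding more than a threshold of network nodes, so dense areas get fine cells while sparse areas stay coarse. The sketch below is a hypothetical illustration of that pattern, not the paper's pipeline.

```python
def build_hierarchy(points, x0, y0, x1, y1, max_pts=2, depth=0, max_depth=6):
    """Quadtree-style hierarchy tree: a cell keeps its points if sparse
    enough; otherwise it is split into four child cells."""
    inside = [(x, y) for (x, y) in points if x0 <= x < x1 and y0 <= y < y1]
    if len(inside) <= max_pts or depth >= max_depth:
        return {"bounds": (x0, y0, x1, y1), "points": inside, "children": []}
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    children = [build_hierarchy(inside, a, b, c, d, max_pts, depth + 1, max_depth)
                for (a, b, c, d) in [(x0, y0, mx, my), (mx, y0, x1, my),
                                     (x0, my, mx, y1), (mx, my, x1, y1)]]
    return {"bounds": (x0, y0, x1, y1), "points": [], "children": children}

def leaves(node):
    # Collect the leaf cells, i.e., the adaptive sampling resolution.
    if not node["children"]:
        return [node]
    return [leaf for ch in node["children"] for leaf in leaves(ch)]

# A dense cluster near the origin plus two isolated nodes: the tree
# refines around the cluster and leaves the rest of the region coarse.
pts = [(0.1, 0.1), (0.12, 0.11), (0.11, 0.13), (0.13, 0.12),
       (7.0, 7.0), (9.0, 2.0)]
tree = build_hierarchy(pts, 0.0, 0.0, 10.0, 10.0)
```

Each leaf cell can then be treated as one unit of a grid-based risk assessment while still aligning with the underlying network nodes, which is the bridge between the two framework styles the abstract describes.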
Technical Session · Logistics, Supply Chains, and Transportation [Virtual] Last Mile Logistics Chair: Canan Gunes Corlu (Boston University) A Hybrid Modeling Approach for Automated Parcel Lockers as a Last-Mile Delivery Scheme: A Case Study in Pamplona (Spain) Adrian Serrano-Hernandez (Public University of Navarre, Institute of Smart Cities); Sergio Martinez-Abad and Aitor Ballano (Public University of Navarre); Javier Faulin (Public University of Navarre, Institute of Smart Cities); and Markus Rabe and Jorge Chicaiza-Vaca (Technical University Dortmund) Abstract Recently, last-mile distribution in cities has been constrained by fast-delivery options, hard time windows, and no-show customers. The promotion of automated parcel locker (APL) systems is seen as a way to mitigate the aforementioned problems. Thus, this paper analyzes APL users in the city of Pamplona (Spain), and proposes the use of hybrid modeling for APL network design. Moreover, agent-based modeling is used to estimate future demand based on a number of socio-economic parameters, i.e., population as well as the rates of online users, e-commerce growth, and others. Likewise, the APL location optimization model is dynamically executed within the simulation framework to minimize the operational and service costs. Our hybrid methodology forecasts a 10% increase in eShoppers while the number of APLs increases by up to 500% over a 3-year time horizon. In light of those results, the use of simulation and optimization tools leverages the promotion of APLs as a last-mile distribution scheme. Parcel Delivery for Smart Cities: A Synchronization Approach for Combined Truck-Drone-Street Robot Deliveries Berry Gerrits and Peter Schuur (University of Twente) Abstract The past decade saw many novel concepts for last-mile delivery. Particularly interesting is the concept in which trucks dispatch – from designated points – drones or robots that do the actual delivery.
The role of the truck driver, however, is generally overlooked. This paper proposes a last-mile delivery concept in which a truck carries a mixed fleet of drones and robots, and in which the driver also has an active role in the delivery process. Our aim is to efficiently align the service time required by the truck driver with the back-and-forth delivery times of the drones and the robots. To this end, we design a street delivery model using continuous approximation. We evaluate our approach using a flexible and reusable simulation model that enables tactical decision-making for logistics service providers, determining which neighborhoods are suitable for drones, street robots, or combinations thereof, in terms of makespan and energy consumption. A Genetic Algorithm Simheuristic for the Open UAV Task Assignment and Routing Problem with Stochastic Traveling and Servicing Times Alfons Freixes (Open University of Catalonia, Euncet Business School); Angel A. Juan, Pedro Jesus Copado-Mendez, and Javier Panadero (Open University of Catalonia); Carles Serrat (Universitat Politècnica de Catalunya-BarcelonaTECH); and Juan Francisco Gómez González (Open University of Catalonia) Abstract Due to their flexibility, unmanned aerial vehicles (UAVs) are gaining importance in transportation and surveillance activities. The usage of UAV swarms raises the need for coordination and optimization of task assignments. Some of these operations can be modeled as team orienteering problems (TOPs). This paper analyzes an open TOP in which a given fleet of homogeneous UAVs, initially located at a single depot, needs to be coordinated in order to maximize the collection of rewards from visiting nodes without exceeding a maximum operation time. As in most real-life applications, both traveling times and servicing times at each node are modeled as random variables.
To solve this NP-hard and stochastic optimization problem, a simheuristic based on the combination of a genetic algorithm with Monte Carlo simulation is proposed. Technical Session · Project Management and Construction [Virtual] Scheduling and Dynamic Simulation Chair: Fang Xu (University of Florida) Enhanced Resource Scheduling Framework for Industrial Construction Projects Maedeh Taghaddos (University of Alberta), Hosein Taghaddos (University of Tehran), Yasser Mohamed (University of Alberta), Ulrich Hermann (PCL Industrial Management Inc.), and Simaan AbouRizk (University of Alberta) Abstract The time and cost required to optimize resource assignments in industrial construction are exacerbated by the size, complexity, and specialized requirements of these projects. This study introduces an automated, simulation-based scheduling method to enhance, accelerate, and facilitate variable resource allocation in industrial construction. The proposed framework links a time-stepped simulation engine to an integrated database management system containing project information and historical data. The developed system auto-generates an efficient schedule respecting project constraints and uncertainties, such as limited resource availability and variable labor resources, based on historical data, calendars, and shifts. Graph theory algorithms are used to optimize variable resource allocation in the time-stepped simulation, resulting in the leveling of resource histograms and, consequently, the generation of an efficient project schedule. Applying the proposed framework to an illustrative example demonstrated its capabilities in generating efficient schedules based on variable resource allocation constraints.
Analyzing Impact of Semi-Productive Work Hours in Scheduling and Budgeting Labor-Intensive Projects: Simulation-Based Approach Leila Zahedi (University of Alberta); Ming Lu (University of Alberta); and Todd Collister (Supreme Steel) Abstract This research investigates labor productivity based on resource-constrained project scheduling simulation models in order to render analytical decision support in planning crew size and worker-activity allocation for steel girder fabrication projects. In the dynamic environment of a structural steel fabrication facility, each laborer (journeyman) is part of teams temporarily formed at particular workstations to conduct various material-handling and connection activities. Discrete-event-simulation-based resource-constrained scheduling analysis is instrumental in analyzing semi-productive work hours resulting from labor transferring between activities and crew matching. In the case study, semi-productive work hours can be lowered from about one half of the total working time to a third by fine-tuning the crew size and work sequencing based on the simulation model, thereby enhancing the time and cost performance of the entire project. Dynamic, Data-Driven Simulation in Construction Using Advanced Metadata Structures and Bayesian Inference Ramzi Roy Labban, Stephen Hague, Elyar Pourrahimian, and Simaan AbouRizk (University of Alberta) Abstract Effective project control in construction requires the rapid identification and subsequent mitigation of deviations from planned baselines and schedules. Although simulation has been used to successfully plan projects in the pre-construction phase, the use of simulation for project control during execution remains limited.
Current real-time simulation strategies have difficulty self-adapting in response to deviations from planned baselines, requiring experienced simulation experts to manually update the input parameters of simulation models. This study proposes a dynamic, data-driven simulation environment that is capable of minimizing the manual intervention required to incorporate as-built construction data in real time by coupling newly-developed metadata structures with Bayesian inference. While the environment is still in development, an overview of the proposed simulation environment is presented, details of the advanced data structures are discussed, and preliminary functionality of the environment is demonstrated. Technical Session · Simulation Education [Virtual] Simulation Education Chair: Jakub Bijak (University of Southampton); Kristina Eriksson (University West, Sweden) An Educational Model for Competence Development within Simulation and Technologies for Industry 4.0 Kristina Eriksson, Eva Bränneby, and Monika Hagelin (University West, Sweden) Abstract In the era of Industry 4.0, businesses are pursuing applications of technological developments towards increased digitization. This in turn necessitates a continuous and increasing demand for competence development of professionals. This paper reports a study of the design of university courses targeted towards professionals and investigates how such an educational incentive can act as a catalyst for the application of technologies for Industry 4.0, including simulation. Quantitative data is collected from fifteen courses addressing the competence need in the manufacturing industry, and the qualitative data includes ten focus groups with course participants from companies. The results highlight that the course design enables knowledge exchange between university and industry and between participants. Moreover, the pedagogy of working on real cases can facilitate opportunities for introducing new technologies to management.
The study shows that the educational incentive explored can act as a catalyst for the application of simulation and technologies within Industry 4.0 in the manufacturing industry. Teaching a Modeling Process: Reflections from an Online Course Jakub Bijak (University of Southampton); André Grow (Max Planck Institute for Demographic Research); Philip A. Higham, Jason Hilton, Martin Hinsch, Kim Lipscombe, Sarah Nurse, and Toby Prike (University of Southampton); Oliver Reinhardt (University of Rostock); Peter WF Smith (University of Southampton); and Adelinde M. Uhrmacher (University of Rostock) Abstract The outbreak of the COVID-19 pandemic in 2020 posed unique challenges for academic and professional education, while at the same time offering opportunities related to the mass switching of the delivery of courses to the online mode. In this paper, we share the experience of organizing and delivering an online doctoral-level course on Agent-Based Modeling for Social Research. Our aim was to teach interdisciplinary content on various elements of the modeling process in a coherent and practical way. In the paper, we offer a critical assessment of different aspects of the course, related to content as well as organization and delivery. By looking at the course in the light of the current knowledge on good teaching and learning practices from the educational and psychological literature, and reflecting on the lessons learned, we offer a blueprint for designing and running complex, multi-thread simulation courses in an efficient way. Structuring a Simulation Course Around the simEd Package for R Barry Lawson (Bates College) and Lawrence Leemis (William & Mary) Abstract The simEd package in R provides a number of functions — including text-based, visualization, and animation functions — that can be useful in teaching a course or a module in discrete-event simulation.
This paper suggests topics and several sample exercises for such a course, demonstrating how the simEd package facilitates delivery of those topics. Technical Session · Healthcare Applications [Virtual] Operations Management and Patient Flow II Chair: Christine Currie (University of Southampton) Simulating New York City Hospital Load Balancing During COVID-19 Enrique Lelo de Larrea (Columbia University); Edward M. Dolan, Nicholas E. Johnson, and Timothy R. Kepler (FDNY); Henry Lam, Sevin Mohammadi, and Audrey Olivier (Columbia University); Afsan Quayyum (FDNY); Elioth Sanabria, Jay Sethuraman, and Andrew W. Smyth (Columbia University); and Kathleen S. Thomson (FDNY) Abstract In most emergency medical services (EMS) systems, patients are transported by ambulance to the closest most appropriate hospital. However, in extreme cases, such as the COVID-19 pandemic, this policy may lead to hospital overloading, which can have detrimental effects on patients. To address this concern, we propose an optimization-based, data-driven hospital load balancing approach. The approach finds a trade-off between short transport times for patients who are not high acuity while avoiding hospital overloading. In order to test the new rule, we build a simulation model tailored for New York City’s EMS system. We use historical EMS incident data from the worst weeks of the pandemic as a model input. Our simulation indicates that 911 patient load balancing is beneficial to hospital occupancy rates and is a reasonable rule for non-critical 911 patient transports. The load balancing rule has been recently implemented in New York City’s EMS system. Modeling of Waiting Lists for Chronic Heart Failure in the Wake of the Covid-19 Pandemic Alan Wise (Lancaster University), Alexander Heib (University of Southampton), Lucy E.
Morgan (Lancaster University), Christine Currie (University of Southampton), Alan Champneys (University of Bristol), Ramesh Nadarajah and Chris Gale (University of Leeds), and Mamas Mamas (Keele University) Abstract The Covid-19 pandemic has disrupted access to health services globally for patients with non-Covid-19 conditions. We consider the condition of heart failure and describe a discrete event simulation model built to describe the impact of the pandemic and associated societal lockdowns on access to diagnosis procedures. The number of patients diagnosed with heart failure fell during the pandemic; in the UK, the number of GP referrals for diagnostic tests in November 2020 was at 20% of its pre-pandemic level. While the numbers in the system have fallen, clinicians believe that this is not reflective of a change in need, suggesting that many patients are delaying access to care during pandemic peaks. While the effect of this is uncertain, it is thought that it could have a significant impact on patient survival. Initial results reproduce the observed increase in the number of patients waiting. A Demand and Capacity Model for Home-based Intermediate Care: Optimizing the 'Step-down' Pathway Alison Harper and Martin Pitt (University of Exeter); Manon De Prez, Zehra Önen Dumlu, and Christos Vasilakis (University of Bath); and Paul Forte and Richard Wood (BNSSG CCG) Abstract Intermediate care supports timely discharge from hospital for patients with complex healthcare needs. The purpose of ‘step-down’ care is to enable patients to leave hospital as soon as they are medically fit, avoiding costly discharge delays and consequent risks to patient health and wellbeing. Determining optimal intermediate care capacity requires balancing costs to both acute hospital and community care providers. Too much community capacity results in underutilized resources and poor economic efficiency, while too little risks excessive hospital discharge delays.
Application of discrete-time simulation shows that total costs across the acute-community interface can be minimized by identifying optimal community capacity in terms of the maximum number of patients for which home visits can be provided by the service. To our knowledge, this is the first simulation study to model the patient pathway from hospital discharge through to community visits. Simulation modeling has supported short-term resource planning in a major English healthcare system. Contributed Paper, Technical Session · Modeling Methodology [Virtual] Modeling Methodologies 1 Chair: Ezequiel Pecker Marcosig (UBA, CONICET) Availability and Stock Rupture Estimation by Using Continuous and Discrete Simulation Models Edson Luiz Ursini and Henry de Castro Lobo dos Santos (Campinas State University) and Marcelo Tsuguio Okano (Centro Paula Souza/Campinas State University) Abstract We propose a method to solve an availability and stock rupture problem considering static and dynamic conditions. Information about failure and repair rates and about the dependence or independence of the blocks is considered, as well as knowledge about spare stock and maintainability. The analysis is initially done using exponential processes, considering the transient solution in Matlab-Simulink. This analysis allows us to evaluate the steady-state solution and the main aspects that must be addressed through discrete event simulation. We consider a situation in which it is possible to use two blocks, one in operation and one in the spare stock, with a single repairman. SLA (Service Level Agreement) contracts, which the parties involved sign, are common, and failure to comply with these contracts may result in a fine. We propose an approach to improve the dimensioning of such systems so that their estimated availability and stock rupture are as close as possible to the real ones.
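Under exponential assumptions, the two-block, single-repairman configuration described in the abstract above reduces to a textbook three-state birth-death Markov chain; the sketch below computes its steady-state availability. This is a generic illustration with invented rates, not the authors' Simulink or discrete-event model.

```python
# Hedged sketch: steady-state availability of a system with one operating
# unit, one spare, and a single repairman, assuming exponential failure
# rate lam and repair rate mu (rates below are illustrative).

def availability(lam, mu):
    # States: n = number of failed units (0, 1, 2).
    # Only one unit operates at a time, so failures occur at rate lam
    # for n < 2; the single repairman repairs at rate mu for n > 0.
    p0 = 1.0
    p1 = p0 * lam / mu   # detailed balance: lam * p0 = mu * p1
    p2 = p1 * lam / mu   # detailed balance: lam * p1 = mu * p2
    total = p0 + p1 + p2
    # System is up whenever at least one unit is working (n < 2).
    return (p0 + p1) / total
```

With lam = 0.1 and mu = 1.0 this gives roughly 0.991, and availability falls as the failure rate grows, which is the kind of static baseline the transient simulation can then be checked against.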
Towards Semi-Automatic Model Specification David Shuttleworth (Old Dominion University) and Jose Padilla (Virginia Modeling, Analysis, and Simulation Center; Old Dominion University) Abstract This paper presents a natural language understanding (NLU) approach to transition a description of a phenomenon towards a simulation specification. As multidisciplinary endeavors using simulations increase, so does the need for teams to communicate better and to make non-modelers active participants in the process. We focus on semi-automating the model conceptualization process towards the creation of a specification, as it is one of the most challenging steps in collaborations. The approach relies on NLU to process narratives, create a model that captures concepts and relationships, and finally provide a specification of a simulation implementation. An initial definition set and grammatical rules are proposed to formalize this process. These are followed by a Design of Experiments (DoE) to test the NLU model accuracy and a test case that generates Agent-Based Model (ABM) conceptualizations and specifications. We provide a discussion on the advantages and limitations of using NLU for model conceptualization and specification processes. How Modeling Methods for Fuzzy Cognitive Maps Can Benefit From Psychology Research Samvel Mkhitaryan (Maastricht University, CAPHRI) and Philippe J. Giabbanelli (Miami University) Abstract Fuzzy Cognitive Maps (FCMs) are aggregate-level simulation models that represent concepts as nodes, capture relationships via weighted edges, and apply an inference mechanism to update the nodes' values until a desired effect is achieved. FCMs are increasingly combined with other techniques. Agent-Based Models (ABMs) use FCMs to represent the 'mind' of each agent, which governs (and is influenced by) interactions with other agents or the environment.
A question continues to elude simulationists: what should be the building blocks for such simulations? FCMs can now be optimized using machine learning and quantitative data, which means that an agent's mind can be automatically modified to closely align with an evidence base. However, there are multiple ways in which an FCM can be transformed: which transformations correctly capture how individuals change their minds? In this paper, we explore these questions using psychology research, thus leveraging knowledge on human behaviors to inform social simulations. Technical Session · Simulation in Industry 4.0 [Virtual] Simulation as Digital Twin in Industry 4.0 Framework II Chair: Lauren Czerniak (University of Michigan) Simulation Optimization for a Digital Twin using a Multi-Fidelity Framework Yiyun Cao, Christine Currie, and Bhakti Stephan Onggo (University of Southampton) and Michael Higgins (Ford) Abstract Digital twin technology is increasingly ubiquitous in manufacturing, and there is a need to increase the efficiency of optimization methods that use digital twins to answer questions about the real system. The decisions that these methods support are typically short-term operational questions and, as a result, optimization methods need to return results in real or near-to-real time. This is especially challenging in manufacturing systems, as the simulation models are typically large and complex. In this article, we describe an algorithm for a multi-fidelity model that uses a simpler low-fidelity neural network meta-model in the first stage of the optimization and a high-fidelity simulation model in the second stage. Initial experimentation suggests that it performs well.
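The two-stage multi-fidelity idea described in the abstract above can be sketched in a few lines: a cheap low-fidelity model screens the whole decision space, and the expensive high-fidelity simulation is run only on a shortlist. Everything below is an invented stand-in (the quadratics play the roles of the neural-network meta-model and the stochastic simulation), not the authors' algorithm.

```python
import random

# Illustrative two-stage multi-fidelity optimization (all models and
# names hypothetical): screen with a cheap surrogate, refine with an
# expensive stochastic evaluation on the best few candidates.

def high_fidelity(x, reps=100, rng=None):
    """Stand-in for a costly stochastic simulation of cost (x - 3)^2."""
    rng = rng or random.Random(0)  # fixed seed keeps the sketch reproducible
    return sum((x - 3) ** 2 + rng.gauss(0, 0.5) for _ in range(reps)) / reps

def low_fidelity(x):
    """Stand-in for a fast meta-model; slightly biased but cheap."""
    return (x - 2.8) ** 2

candidates = [i * 0.5 for i in range(13)]             # decisions 0.0 .. 6.0
shortlist = sorted(candidates, key=low_fidelity)[:3]  # stage 1: cheap screen
best = min(shortlist, key=high_fidelity)              # stage 2: costly refine
```

Only 3 of the 13 candidates ever reach the expensive model, which is the source of the speed-up; the trade-off is that a badly biased low-fidelity model could screen out the true optimum.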
Applying Simheuristics for Safety Stock and Planned Lead Time Optimization in a Rolling Horizon MRP System under Uncertainty Wolfgang Seiringer (University of Applied Sciences Upper Austria), Juliana Castaneda (Universitat Oberta de Catalunya), Klaus Altendorfer (University of Applied Sciences Upper Austria), and Javier Panadero and Angel Juan (Universitat Oberta de Catalunya) Abstract Material requirements planning (MRP) is one of the main production planning approaches implemented in enterprise resource planning systems, and one that is broadly applied in practice. Since customers’ demands evolve over time, the MRP method is usually applied in rolling horizon planning, in which a safety stock and a planned lead time are usually employed to reduce the negative effects of uncertainty in the production system or in the customers’ demands. Considering uncertainty conditions in rolling horizon planning leads to additional difficulties in determining the optimal planning parameters. In this paper, a multi-stage and multi-item production system is simulated by considering random customers’ demands and other sources of uncertainty. With the goal of minimizing the sum of inventory and backorder costs, a simheuristic algorithm is proposed and tested. Improving Simulation Optimization Run Time When Solving for Periodic Review Inventory Policies in a Pharmacy Lauren L. Czerniak, Mark S. Daskin, Mariel S. Lavieri, Burgunda V. Sweet, Jennifer Erley, and Matthew A. Tupps (University of Michigan) Abstract Pharmaceutical drugs are critical to patient care, but demand and supply uncertainties in this inventory system make decision-making a challenging task. In this paper, we present a simulation-optimization model that determines near-optimal (s,S) periodic review inventory policies that minimize the expected cost per day. The model accounts for perishability, positive lead time, stochastic demand, and supply disruptions.
We implement a Binary Grid-Search algorithm which uses the structure of the objective function to quickly solve the simulation-optimization model. The numerical results illustrate how the Binary Grid-Search algorithm runs 21 times faster (when performing 10,000 replications) than an Exhaustive Grid-Search, without sacrificing solution accuracy. This paper provides an efficient method to solve for near-optimal (s,S) periodic review inventory policies, which is essential in a pharmacy inventory system that handles thousands of different drugs. Indolence is fatal: Research Opportunities in Designing Digital Shadows and Twins for Decision Support Teresa Marquardt (Christian-Albrechts-Universität zu Kiel), Lucy Morgan (Lancaster University Management School), and Catherine Cleophas (Christian-Albrechts-Universität zu Kiel) Abstract Digital twins and shadows have gained increasing popularity in industry and research. The terms describe simulation systems that mirror real-world systems, such as service or manufacturing lines, aligned to (near) perfection based on automated data streams. We implement a set of perfect model experiments to demonstrate how deviations between a digital shadow and the real world can arise, affect predictive accuracy, and may be eliminated. As an illustrative example, we simulate a simple sequential production line and its digital shadow. The paper concludes with a summary of identified research opportunities. Technical Session · Military and National Security Applications [Virtual] Medical Evacuation, Security Screening, and Unmanned Underwater Vehicles Chair: Danielle Morey (University of Washington) Identification of Latent Structure in Spatio-Temporal Models of Violence Nicholas Clark and Krista Watts (West Point) Abstract The modeling of violence, including terrorist activity, over space and time is often done using one of two broad classes of statistical models.
Typically, the location of an event is modeled as a spatio-temporal point process, and the latent structure is either modeled through a latent Gaussian process motivated by a log-Gaussian Cox process or through data dependency similar to a Hawkes process. The former is characterized through dependence in an unobserved latent Gaussian process, while the latter assumes a dependence driven by the data itself. While both techniques have been used successfully, it remains unclear whether the processes are practically different from one another. In this manuscript, we demonstrate that in many situations, the most common statistic used to characterize clustering in a process, Ripley's K function, cannot differentiate between the two processes and should not be used. Analyzing the Impact of Triage Classification Errors on Military Medical Evacuation Dispatching Policies Emily S. Graves, Phillip R. Jenkins, and Matthew J. Robbins (Air Force Institute of Technology) Abstract This paper analyzes how triage classification errors and blood transfusion kits impact military medical evacuation (MEDEVAC) system performance with regard to dispatching policies. A discounted, infinite-horizon Markov decision process (MDP) model is formulated to analyze the MEDEVAC dispatching problem. A notional, representative scenario based in Azerbaijan is utilized to compare the MDP-generated policies to current practices. Results reveal the MDP-generated dispatching policy outperforms the currently practiced dispatching policy by 2.18%. Triage classification errors negatively impact system performance by between 0.6% and 2.5% for the scenarios analyzed. Moreover, the inclusion of blood transfusion kits on board aircraft increases MEDEVAC system performance by between 2.83% and 4.37%, depending on which units are equipped.
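The abstract above does not reproduce the MEDEVAC model itself, but discounted, infinite-horizon MDPs of this kind are commonly solved by standard value iteration. The sketch below shows that generic technique on a deliberately tiny, invented two-state "dispatch" toy (states, actions, and rewards are hypothetical, not from the paper).

```python
# Generic value iteration for a small discounted MDP. P[(s, a)] lists
# (next_state, probability) pairs; R[(s, a)] is the immediate reward.

def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    V = {s: 0.0 for s in states}
    while True:
        V_new = {}
        for s in states:
            V_new[s] = max(
                R[(s, a)] + gamma * sum(p * V[s2] for s2, p in P[(s, a)])
                for a in actions[s]
            )
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

# Invented toy: when idle, dispatching the "near" unit pays more than
# the "far" one; either dispatch makes the system busy for a while.
states = ["idle", "busy"]
actions = {"idle": ["near", "far"], "busy": ["wait"]}
P = {("idle", "near"): [("busy", 1.0)],
     ("idle", "far"): [("busy", 1.0)],
     ("busy", "wait"): [("idle", 0.5), ("busy", 0.5)]}
R = {("idle", "near"): 10.0, ("idle", "far"): 6.0, ("busy", "wait"): 0.0}
V = value_iteration(states, actions, P, R)
```

The optimal policy falls out of the converged values by taking, in each state, the action that maximizes the bracketed one-step lookahead.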
A Simulation-Optimization Approach to Improve the Allocation of Security Screening Resources in Airport Terminal Checkpoints Eduardo Pérez and Logan Taunton (Texas State University) and Jorge A. Sefair (Arizona State University) Abstract In this research, a simulation-optimization strategy is developed to improve the operation of airport security screening checkpoints (SSCPs). The strategy aims to improve any airport’s SSCP operations by providing a flexible modeling approach to decide optimal checkpoint configurations and their corresponding workforce allocations. Simulation-optimization is a suitable framework for problems involving data uncertainties that evolve over time, requiring important system decisions to be made prior to observing the entire data stream. This is indeed the case in SSCPs, where passenger arrival times are difficult to predict and requirements for equipment and human resources must be scheduled in advance. The team explicitly included the uncertainties associated with future passenger arrivals, as well as the availability and performance levels of the resources, in computing staffing and system configuration decisions. The proposed simulation-optimization strategy provided a 31.4% improvement in passengers’ cycle time when compared to a benchmark scenario. Multi-Fidelity Modeling for the Design of a Maritime Environmental Survey Network utilizing Unmanned Underwater Vehicles Danielle Morey (University of Washington), Randall Plate and Cherry Wakayama (Naval Information Warfare Center Pacific), and Zelda Zabinsky (University of Washington) Abstract New maritime operational concepts are being considered for future network topologies that are efficient and reliable in underwater domains, where the design of networking is challenging due to the harshness of the environment.
Various communication and network simulation tools exist to model scenarios of interest and evaluate metrics such as latency, throughput, and reliability with high fidelity. However, the computation time required for high-fidelity simulation is extensive when evaluating the many network topologies associated with topology optimization. Thus, we develop low-fidelity models to explore many network topologies and identify a few Pareto-optimal configurations to evaluate with the high-fidelity simulation. In this paper, we demonstrate a multi-fidelity topology optimization methodology for maritime environmental survey operations involving multiple unmanned underwater vehicles. The low-fidelity models developed for this maritime operation scenario are able to accurately identify the intuitive optimal solution based on multiple objectives, which is then validated by high-fidelity simulations. Technical Session · Military and National Security Applications [Virtual] Naval Forces, Population Dynamics, and Amphibious Platform Cost Analysis Chair: Michèle Fee (Defence Research and Development Canada) A Simulation Model to Evaluate Naval Force Fleet Mix Michèle Fee and Jean-Denis Caron (Defence Research and Development Canada) Abstract The Defence Research & Development Canada – Centre for Operational Research and Analysis created a discrete-event simulation model, called the Platform Capacity Tool (PCT), to answer questions regarding the suitability of fleet mixes (numbers and types of naval platforms) required by the Royal Canadian Navy to meet its desired operational output and to fulfill its mandate. The PCT, implemented in Arena software, is a detailed scheduling tool that assigns platforms to random and scheduled events as they occur in the simulation. The assignment is based on event prioritization and on basic scheduling rules.
The tool is flexible, allowing the user to analyze various fleet mixes and helping to answer “what if” type questions. This paper describes the PCT (inputs, outputs, and assumptions) and presents a notional scenario involving the transition from one class of ship to two classes to demonstrate how the tool can be applied. Modelling the Mentee-Mentor Population Dynamics: Continuous and Discrete Approaches Samuel Schaffel and François-Alex Bourque (Defence Research and Development Canada – Centre for Operational Research and Analysis) and Slawomir Wesolkowski (Royal Canadian Air Force) Abstract In order to be effective, the military workforce needs to continually produce experienced, well-trained personnel. Personnel go through different phases: recruitment (production), experience building (absorption), and retention (avoiding attrition). Absorption in military workforce modelling is often limited to maintaining the health of an occupation. The mentoring process, the conveying of experience to junior staff by senior staff, is rarely modelled realistically enough to represent the absorption of new personnel under varied occupational health conditions, including high ratios of mentees to mentors. To explore this dynamic, a two-state model based on the well-known predator-prey model is introduced, in which mentees upgrade to mentors and the training capacity is modulated by the number of mentors. A continuous deterministic model is compared to a discrete stochastic simulation. The models consider a hypothetical military occupation in which the mentor-mentee relationship is important for career progression.
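The two-state mentee-mentor idea described above can be sketched as a deterministic discrete-time recursion in which training throughput is capped by mentor capacity, echoing the predator-prey flavor of the model. All parameter values below are invented for illustration; this is not the paper's calibrated model or its stochastic variant.

```python
# Hedged sketch (invented parameters): mentees upgrade to mentors at a
# rate capped by available mentoring capacity, while both groups lose
# people to attrition and new mentees arrive at a fixed intake rate.

def mentor_dynamics(mentees, mentors, steps, intake=20.0,
                    capacity_per_mentor=2.0, upgrade_rate=0.2,
                    attrition=0.05):
    history = []
    for _ in range(steps):
        # Training throughput is limited by the mentors' total capacity.
        trainable = min(mentees, capacity_per_mentor * mentors)
        upgraded = upgrade_rate * trainable
        mentees += intake - upgraded - attrition * mentees
        mentors += upgraded - attrition * mentors
        history.append((mentees, mentors))
    return history

history = mentor_dynamics(mentees=100.0, mentors=10.0, steps=200)
```

Starting mentor-poor, the system is capacity-limited at first, then settles into an equilibrium where intake balances upgrades plus attrition; a stochastic version would replace the deterministic flows with random draws, which is the comparison the paper explores.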
Cost Analysis for Operational and System Level Considerations for an Electromagnetic Railgun on an Amphibious Platform Christian Diaz, Paul Beery, and Anthony Pollman (Naval Postgraduate School) Abstract This article investigates outfitting an amphibious platform with an electromagnetic railgun (EMRG), a high-velocity weapon that can fire projectiles at ranges up to 100 nautical miles. An EMRG would provide the amphibious fleet with offensive capability, as well as defensive capability against surface threats, missiles, and airborne threats. A cost estimate for railgun integration and a cost effectiveness analysis, from both an operational and a system perspective, are presented. The cost estimate for EMRG integration is FY20 $134.66M, given a 32 MJ railgun. From an operational effectiveness perspective, hit probability against air targets was found to have a greater impact on performance than any other design characteristic. When balancing cost versus effectiveness, a 10 MJ railgun is preferred to a 32 or 20 MJ railgun. Future work includes modeling and simulation of various concepts of operation. Technical Session · Analysis Methodology [Virtual] Metamodeling and Simulation Optimization Chair: Haoting Zhang (University of California, Berkeley; IEOR Department) Neural Network-Assisted Simulation Optimization with Covariates Haoting Zhang (University of California, Berkeley); Jinghai He (Shanghai Jiao Tong University); Donglin Zhan (Columbia University); and Zeyu Zheng (University of California, Berkeley) Abstract In real-time decision-making problems for complicated stochastic systems, a covariate that reflects the state of the system is observed in real time and a state-dependent decision needs to be made immediately to optimize some system performance.
Such system performances, for complicated stochastic systems, often are not available in closed form and require time-consuming simulation experiments to evaluate, which can be prohibitive in real-time tasks. We propose two neural network-assisted methods to address this challenge by effectively utilizing simulation experiments that are conducted offline before the real-time tasks. One key step in the proposed methods integrates a classical simulation metamodeling approach with neural networks to jointly capture the mapping from the covariate and the decision variable to the system performance, which enhances the use of offline simulation data and reduces the risk of model misspecification. A brief numerical experiment is presented to illustrate the performance of the proposed methods. Neural Predictive Intervals for Simulation Metamodeling Henry Lam (Columbia University) and Haofeng Zhang (IEOR Department, Columbia University) Abstract Abstract Simulation metamodeling refers to the use of lower-fidelity models to represent input-output relations with few simulation runs. Stochastic kriging, which uses Gaussian processes and captures both aleatoric and epistemic uncertainties, is a versatile and predominant technique for such a task. However, this approach relies on specific model assumptions and could encounter scalability challenges. In this paper, we study an alternative metamodeling approach using neural-network-based input-output prediction intervals. We cast the metamodeling into an empirical constrained optimization framework to train the neural network that attains accurate prediction in terms of coverage and prediction interval width. We present a validation machinery and show how our method can enjoy a distribution-free finite-sample guarantee on the prediction performance. We demonstrate the superior performance of our method compared with other methods, including stochastic kriging, through some numerical examples.
Information Consistency of Stochastic Kriging and Its Implications Yutong Zhang and Xi Chen (Virginia Tech) Abstract Abstract In this paper, we introduce the concept of information consistency for Bayesian Gaussian process models and further provide the information consistency results for stochastic kriging (SK). It is found that, to ensure information consistency of SK, the budget allocated should grow in a fashion that is commensurate with the smoothness level of the mean response function to estimate, as the number of design points approaches infinity. Moreover, it is recommended that an experiment design consist of a relatively large number of design points with a few replications at each when given a fixed budget to expend. Technical Session · MASM: Semiconductor Manufacturing [Virtual] Factory Operations 2 Chair: Reha Uzsoy (North Carolina State University) Machine Learning-based Periodic Setup Changes for Semiconductor Manufacturing Machines Je-Hun Lee and Hyun-Jung Kim (KAIST); Young Kim (Sungkyunkwan University, KAIST); Yun Bae Kim (Sungkyunkwan University); and Byung-Hee Kim and Gu-Hwan Chung (VMS Solutions) Abstract Abstract Semiconductor manufacturing machines, especially for photo-lithography processes, require large setup times when changing job types. Hence, setup operations rarely occur unless there is no job to be processed. In practice, a simulation-based method that predicts the incoming WIP is often used to determine whether or not to change machine setup states. The simulation-based method can provide useful information on the future production environment with high accuracy but takes a long time, which can delay the setup change decisions. Therefore, this work proposes a machine learning-based approach that determines the setup states of the machines. The proposed method shows better performance than several heuristic rules in terms of movement.
Adaptive Rule Based Order Release In Semiconductor Manufacturing Philipp Neuner (University of Innsbruck) Abstract Abstract This paper analyzes two periodic order release mechanisms and their promising extensions by using a simulation model of a scaled-down wafer fabrication facility. One extends the Backward Infinite Loading (BIL) approach by dynamically adjusting lead times and considering safety lead times, and the other extends the COrrected aggregate Load Approach (COLA) by incorporating a dynamic time limit into its release procedure (Overload). Both are periodic approaches aiming at improving the timing performance and can react to the dynamics on the shop floor, where semiconductor manufacturing provides a very challenging environment. The results show that Overload outperforms all other mechanisms by yielding lower total costs, mainly due to a more balanced shop, which results in the lowest WIP costs. Further, Overload reduces inventory costs compared to BIL and COLA. These results reinforce the finding of previous research that periodic rule-based order release models are a viable alternative for semiconductor manufacturing. Operator Resource Planning in a Giga Fab During Covid-19 Restrictions Ching Foong Lee and Aik Ying Tang (Infineon Technologies (Kulim) Sdn Bhd), Georg Seidel (Infineon Technologies Austria AG), and Soo Leen Low and Boon Ping Gan (D-SIMLAB Technologies Pte Ltd) Abstract Abstract Infineon’s Kulim wafer fab facility in Malaysia had limited operator availability caused by the MCO (Movement Control Order) for a period of 30 days during the COVID-19 pandemic. An existing and well-validated Discrete Event Fab simulation model was extended with operator modelling, and was used to conduct case studies, evaluating the impact of different operator availability scenarios including work disruptions for several shifts within a week.
The studies provide a guideline to the management to derive mitigation strategies, weighing the trade-off between cost and speed loss due to limited operator resources. Technical Session · Logistics, Supply Chains, and Transportation [Virtual] Parcel Supply Chains Chair: Javier Faulin (Public University of Navarre, Institute of Smart Cities) Last-Mile Delivery of Pharmaceutical Items to Heterogeneous Healthcare Centers with Random Travel Times and Unpunctuality Fees Ángel A. Juan (Open University of Catalonia), Massimo Bertolini (University of Modena and Reggio Emilia), Javier Panadero (Open University of Catalonia), Mattia Neroni (University of Parma), and Erika Herrera (Open University of Catalonia) Abstract Abstract This paper analyzes a real-life distribution problem that is related to a pharmaceutical supplier in Spain. Every day, a fleet of vehicles has to deliver the previously requested items to a large set of pharmacies. The distribution has to be conducted with (i) the total distance and time incurred by the entire fleet being reasonably low and (ii) the time of the delivery meeting the specified time windows or, if that is not possible and some delays occur, the total fee incurred by these unpunctualities being minimized. Unpunctuality fees depend upon how important the customer is to the distributor, and on the size of the tardiness gap. To include even more realistic details, travel times are modeled as random variables, which also makes the problem more challenging to solve by employing traditional optimization methods. To solve this stochastic variant of the problem, a simheuristic algorithm is proposed and evaluated. On-Demand Logistics Service For Packages: Package Bidding Mechanism vs.
Platform Pricing Manish Tripathy, Ramin Ahmed, and Michael Kay (North Carolina State University) Abstract Abstract This paper is an exploratory analysis of an on-demand service platform for packages, where the packages bid for transportation service through various auction mechanisms, trucks offer transportation services, and distribution centers match demand and supply. All agents are independent and individually incentivized to participate. Using a utility-based model, we characterize the participation incentives for all the agents, implement the state-of-the-art pricing mechanisms from industry and academia, and design and implement a first-price auction-based mechanism. Using simulation and through performance indicators such as throughput, profit of the distribution center, and consumer surplus, we find that the package bidding mechanism significantly outperforms the status quo. Furthermore, we extend our analysis to include uniform price and Vickrey-Clarke-Groves auctions. We find that the packages prefer the Vickrey-Clarke-Groves auction whereas the trucks and distribution centers prefer the first-price auction; although all of them prefer the bidding mechanism to the status quo pricing mechanism. Combining Simulation with Reliability Analysis in Supply Chain Project Management under Uncertainty: A Case Study in Healthcare Marisa A. Lostumbo, Miguel Saiz, and Laura Calvet (Universitat Oberta de Catalunya); David Lopez-Lopez (ESADE Business School); and Angel A. Juan (Universitat Oberta de Catalunya) Abstract Abstract Many projects involving supply networks can be logically represented by multiple processing paths. When the supply chain is working under deterministic conditions, computing the total time required by each path is a trivial task. However, this computation becomes troublesome when processing times in each stage are subject to uncertainty.
In this paper, we assume the existence of historical data that allow us to model each stage's processing time as a random variable. Then, we propose a methodology combining Monte Carlo simulation with reliability analysis in order to (i) estimate the project survival function and (ii) identify the most likely ‘bottleneck’ path. Identifying these critical paths facilitates reducing the project makespan by investing the available budget in improving the performance of some stages along the path, e.g., by modifying the transportation mode at one particular stage in order to speed up the process. A numerical example is employed to illustrate these concepts. From Logistics Process Models To Automated Integration Testing: Proof-of-Concept Using Open-Source Simulation Software Paul Konrad Reichardt and Wladimir Hofmann (Otto-von-Guericke-Universität), Sebastian Lang (Fraunhofer Institute for Factory Operation and Automation IFF), and Tobias Reggelin (Otto-von-Guericke-Universität) Abstract Abstract This paper explores the practical integration of simulation methods into software development processes. An automated integration testing approach is presented, which enables virtual commissioning. Following an analysis of the current state of knowledge and the standards of software development, a case study on logistics order management is presented, referring to a typical B2B application in the retail logistics sector. The proof-of-concept shows how the usage of a simulation model for automated integration testing and its inclusion into continuous integration can help to ensure software quality, particularly for process-centered logistics applications. The implemented setup proves the feasibility of the approach, using standard open-source development tools and a Python-based open-source simulation library. Technical Session · Analysis Methodology [Virtual] Sampling Methodology and Reliability Chair: Dashi I.
Singham (Naval Postgraduate School) Learning to Simulate Sequentially Generated Data via Neural Networks and Wasserstein Training Tingyu Zhu (Peking University) and Zeyu Zheng (University of California, Berkeley) Abstract Abstract We propose a new framework of a neural network-assisted sequential structured simulator to model, estimate, and simulate a wide class of sequentially generated data. Neural networks are integrated into the sequentially structured simulators in order to capture potential nonlinear and complicated sequential structures. Given representative real data, the neural network parameters in the simulator are estimated through a Wasserstein training process, without restrictive distributional assumptions. Moreover, the simulator can flexibly incorporate various kinds of elementary randomness and generate distributions with certain properties such as heavy tails. Regarding statistical properties, we provide results on consistency and convergence rate for estimation of the simulator. We then present numerical experiments with synthetic and real data sets to illustrate the performance of our estimation method. Competing Incentives in Sequential Sampling Rules Dashi Singham (Naval Postgraduate School) and J. George Shanthikumar (Purdue University) Abstract Abstract We describe a framework in which two parties have competing objectives in estimating the mean performance of a stochastic system through sequential sampling. A "regulator" is interested in obtaining a correct measurement with an acceptable probability level, while the "stakeholder" in the project is interested in minimizing the cost associated with the output performance measure and the sampling cost. We demonstrate how the stakeholder can choose an optimal sampling rule to minimize their costs when they have private information that is not shared with the regulator.
We further suggest how the regulator can choose a key controllable parameter of the optimal sampling rule in order to asymptotically approach a fair result for both parties. Measuring Reliability of Object Detection Algorithms for Automated Driving Perception Tasks Huanzhong Xu and Jose Blanchet (Stanford) and Marcos Paul Gerardo-Castro and Shreyasha Paudel (Ford Motors Co.) Abstract Abstract We build a data-driven methodology for assessing the performance reliability, and guiding the improvement, of sensor algorithms for automated driving perception tasks. The methodology takes as input three elements: I) one or more algorithms for object detection when the input is an image; II) a dataset of camera images that represents a sample from an environment; and III) a simple policy that serves as a proxy for a task such as driving assistance. We develop a statistical estimator, which combines I)-III) and a data augmentation technique, in order to rank the reliability of perception algorithms. Reliability is measured as the chance of collision given the speed of the ego vehicle and the distance to the closest object in range. We are able to compare algorithms in the (speed vs. distance-to-closest-object) space using p-values and use this information to suggest improved-safety algorithms. Efficient Simulation for Linear Programming under Uncertainty Best Contributed Theoretical Paper - Finalist Dohyun Ahn and Lewen Zheng (The Chinese University of Hong Kong) Abstract Abstract We consider a problem of estimating the probability that the optimal value of a stochastic linear program exceeds a large threshold. Inspired by the classical theory of linear programming, we partition the sample space of random components so that the optimal value can be generated without solving a linear program for each sample. This enables us to develop an efficient importance sampling scheme for computing this probability when the random components are jointly normal.
We prove its asymptotic efficiency under the regime where the threshold increases. Our numerical experiments reveal that the proposed method significantly outperforms the existing simulation techniques in the literature. Technical Session · Data Science for Simulation [Virtual] DSS 2 Chair: Hamdi Kavak (George Mason University); Abdolreza Abhari (Ryerson University) ExecutionManager: A Software System to Control Execution of Third-Party Software that Performs Network Computations Chris J. Kuhlman, Henry Carscadden, Lucas Machi, Aparna Kishore, Dustin Machi, and S. S. Ravi (University of Virginia) Abstract Abstract We describe a software system called ExecutionManager (abbreviated EM) that controls the execution of third-party software (TPS) for analyzing networks. Based on a configuration file that contains a specification for the execution of each TPS, the system launches any number of stand-alone TPS codes if the projected execution time and the graph size are within user-imposed limits. A system capability is to estimate the running time of a TPS code on a given network through regression analysis, to support execution decision-making by EM. We demonstrate the usefulness of EM in generating network structure parameters and distributions, and in extracting metadata information from these results. We evaluate its performance on directed and undirected, simple and multi-edge graphs that range in size over seven orders of magnitude in numbers of edges, up to 1.5 billion edges. The software system is part of a cyberinfrastructure called net.science for network science. AutoML Approach to Classification of Candidate Solutions for Simulation Models of Logistic Systems Ilya Jackson (Massachusetts Institute of Technology, Transport and Telecommunication Institute) and Josué C.
Velázquez-Martínez (Massachusetts Institute of Technology) Abstract Abstract This paper proposes a framework based on automated machine learning for the classification of candidate solutions for logistic and industrial discrete event simulation models. The proposed framework aims to augment the capabilities of a decision-maker to make critical yes-no decisions quickly with confidence and nearly perfect accuracy. The framework is based on a combination of artificial neural networks and a genetic algorithm, in which the genetic algorithm orchestrates both neural architecture search and hyperparameter optimization. The paper demonstrates that the classifiers obtained through the proposed procedure can learn and generalize complex nonlinear relations within the discrete-event simulation models of subsystems crucial to food supply chains and classify a candidate solution as profitable or not. Ride Along: Using Simulation to Staff the Ithaca Police Department Jamol Pender, Christopher Archer, and Matthew Ziron (Cornell University) Abstract Abstract In this work, we combine data science, queueing theory and stochastic simulation into an analytical framework to examine the operations of the Ithaca Police Department (IPD) and develop novel solutions that address recent staffing shortages of the IPD. With 3 years of IPD call data, we construct and simulate a novel multiple-dispatch queueing process for modeling the IPD's operations. Unfortunately, using the data alone does not capture the real dynamics of IPD's operations, which made it necessary to augment the call data with qualitative assessments from police ride-alongs. Our analysis augmented with ride-along assessments accurately replicates metrics such as emergency response time and officer utilization. Although it is difficult to fully capture the claimed strains on the department through data alone, hiring additional officers should make an observable improvement in the performance of the IPD.
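The multiple-dispatch queueing process described in the Ithaca abstract can be illustrated with a toy discrete-event simulation: calls arrive at random, each call ties up one or more officers simultaneously, and average response delay is compared across staffing levels. All rates, the FIFO dispatch rule, and the horizon below are hypothetical stand-ins, not IPD data or the authors' model.

```python
import heapq
import random

def simulate(n_officers, arrival_rate=2.0, service_rate=1.0,
             max_units=2, horizon=10_000.0, seed=7):
    rng = random.Random(seed)
    t, free = 0.0, n_officers
    busy_until = []   # min-heap of (completion_time, officers_released)
    waiting = []      # FIFO queue of (arrival_time, officers_needed)
    total_wait, served = 0.0, 0
    next_arrival = rng.expovariate(arrival_rate)
    while t < horizon:
        if busy_until and busy_until[0][0] <= next_arrival:
            t, released = heapq.heappop(busy_until)   # a call completes
            free += released
        else:
            t = next_arrival                          # a new call arrives
            waiting.append((t, rng.randint(1, max_units)))
            next_arrival = t + rng.expovariate(arrival_rate)
        # Dispatch in FIFO order while enough officers are free for the head call.
        while waiting and waiting[0][1] <= free:
            arrived, units = waiting.pop(0)
            free -= units
            total_wait += t - arrived
            served += 1
            heapq.heappush(busy_until, (t + rng.expovariate(service_rate), units))
    return total_wait / max(served, 1)

w4, w8 = simulate(4), simulate(8)   # average response delay at two staffing levels
```

Under the assumed workload (roughly three officer-hours of demand per hour), waits climb steeply as the staffing level approaches that load, which is the qualitative effect the staffing analysis quantifies.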
Technical Session · Healthcare Applications [Virtual] Applications of Simulation in Healthcare II Chair: Iman Attari (Indiana University Bloomington) A Simulation Analysis of Analytics-driven Community-based Re-integration Programs Iman Attari (Indiana University Bloomington), Parker Adam Crain and Pengyi Shi (Purdue University), Jonathan Eugene Helm (Indiana University Bloomington), and Nicole Adams (Purdue University) Abstract Abstract We develop a data-driven simulation model in partnership with Tippecanoe County Community Corrections to evaluate assignment policies of reintegration programs. These programs are intended to help clients with their transition back to society after release, with the goal of ending the "revolving door of recidivism." Leveraging client-level and system-level data, we develop a queueing-based network model to capture the movement of clients in the system. We integrate a personalized recidivism prediction to capture heterogeneous risks, along with estimated effects of reintegration programs from the literature. Using simulation, we find that the largest benefit is achieved by implementing any kind of re-integration program, regardless of assignment policy, as the savings in the societal and re-incarceration costs (from recidivism) outweigh program costs. Assignment policy based on predictive analytics achieves a 1.5-times larger reduction in recidivism compared to current practice. In expanding capacity, greater consideration should be given to investing in analytics-driven program assignments. Measuring the Impact of Data Standards in an Internal Hospital Supply System Manuel Rossetti, Edward Pohl, and Kayla McKeon (University of Arkansas) and Rodney Kizito (U. S. Department of Energy) Abstract Abstract Within a healthcare supply chain, the ability to trace and track patient care items through the use of unique identifiers has been shown to be a best practice.
Data standardization through the use of Global Trade Item Numbers (GTIN) and Global Location Numbers (GLN) has helped hospitals improve patient safety and medical cost reimbursements. Despite these proven benefits, the full adoption of data standards across all items throughout the industry has been slow. This paper uses simulation to illustrate the potential benefits within a hospital supply system in terms of the improved management of items that can expire. By reducing the time and cost associated with expiration management, hospitals can show the benefits of data standards and justify the efforts to move towards a full adoption of data standards. Technical Session · Modeling Methodology [Virtual] Tools and Environments Chair: Rodrigo Castro (Universidad de Buenos Aires, ICC-CONICET) The OpenModelica Environment for Building Digital Twins of Sustainable Cyber-Physical Systems Peter Fritzson (Linköping University) Abstract Abstract The Modelica modeling language and technology is being warmly received by the world community in modeling and simulation, with major applications in virtual prototyping and digital twins of complex cyber-physical systems, which mix physical system dynamics with software (cyber) and networks. It is enabling a revolution in this area, based on its ease of use, visual design of models with a combination of Lego-like predefined model building blocks, its ability to define model libraries with reusable components, its support for modeling and simulation of complex applications involving parts from several application domains, and many more useful facilities. Adoption is further strengthened by the freely available open-source OpenModelica environment for building digital twins and virtual prototypes as well as system analysis and optimization, especially relevant in transforming society toward sustainability, including applications in renewable energy and fossil-free transportation.
This paper gives an overview of this technology as well as some applications. A Simulation Tool to Provide Alternative Products in Out-of-Stock Situations for B2B Companies Sebastian Pinto Guzman (USACH), Veronica Gil-Costa (UNSL), and Mauricio Marin (USACH) Abstract Abstract With the rise of online sales and services dedicated to advertising aimed at the profiles of online users, companies selling general products need to adapt their business rules to improve sales strategies. In this paper, we present a simulation tool to support business decision-making and improve the efficiency of an e-commerce system. It allows testing different sales strategies within the context of Business-to-Business models, which focus on the purchase and sale of non-strategic products. The idea is to provide the client's company with different alternative products when the desired product is out of stock. We propose and implement, in our simulation tool, three recommendation algorithms based on sales parameters such as the price and popularity of products and the similarity between the recommended product and the one that is out of stock. The proposed algorithms are tested with real datasets, which allows evaluating the performance and effectiveness of the recommendation algorithms. Parallel Application Power and Performance Prediction Modeling Using Simulation Kishwar Ahmed (University of South Carolina Beaufort), Kazutomo Yoshii (Argonne National Laboratory), and Samia Tasnim (Florida A&M University) Abstract Abstract High performance computing (HPC) systems run compute-intensive parallel applications requiring a large number of nodes. An HPC system consists of heterogeneous computer architecture nodes, including CPUs, GPUs, field programmable gate arrays (FPGAs), etc. Power capping is a method to improve parallel application performance subject to variable power constraints. In this paper, we propose a parallel application power and performance prediction simulator.
We present a prediction model for application power and performance under unknown power-capping values on heterogeneous computing architectures. We develop a job scheduling simulator based on a parallel discrete-event simulation engine. The simulator includes a power and performance prediction model, as well as a resource allocation model. Based on real-life measurements and trace data, we show the applicability of our proposed prediction model and simulator. Technical Session · Modeling Methodology [Virtual] Applications in engineering and social systems Chair: Abdurrahman Alshareef (Arizona State University) Simulation Case Studies On An Advanced Sensitivity Analysis For New Extended Bus Types In The Modern Power Systems Zongjie Wang (University of Connecticut) and C.L. Anderson (Cornell University) Abstract Abstract A simulation analysis in power system planning and operations considers the sensitivity of power flow, stability, and security in response to changes in system states. Traditionally, these analyses have the power system defined by three bus-types. As the contribution from renewable sources increases along with the proliferation of new electronic control devices, traditional bus-types are no longer sufficient. Reliance on the traditional bus-types for sensitivity analysis may not guarantee simulation accuracy and may lead to poor decisions, thereby jeopardizing system reliability and stability. To address this issue, this paper proposes new bus-types that arise in modern power systems, and determines the corresponding sensitivity calculation formulas based on the equations of the new bus-types. The updated sensitivity analysis is implemented on two simulation case studies: the IEEE 14-bus test system and a 262-bus system. Results show that traditional bus-types lead to significant error, whereas the extended bus-types produce more accurate results.
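As a rough illustration of the kind of power/performance prediction described in the HPC abstract above, one simple assumption is that runtime varies inversely with the power cap; fitting that form to a few measured (cap, runtime) pairs then lets us predict runtime at unmeasured caps. The model form and every number below are illustrative assumptions, not the authors' model.

```python
def fit_inverse_model(caps, runtimes):
    """Ordinary least squares for runtime = a + b / cap (using x = 1 / cap)."""
    xs = [1.0 / c for c in caps]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(runtimes) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, runtimes))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

def predict(a, b, cap):
    # Predicted runtime at an unmeasured power-cap value.
    return a + b / cap

# Synthetic "measurements" generated from runtime = 50 + 6000 / cap
# (watts -> seconds); purely illustrative numbers.
caps = [60, 80, 100, 120]
runtimes = [50 + 6000 / c for c in caps]
a, b = fit_inverse_model(caps, runtimes)
```

Because the synthetic data follow the assumed form exactly, the fit recovers a ≈ 50 and b ≈ 6000; with real measurements one would validate the functional form per application, which is part of what the proposed simulator automates.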
Data-driven Exploration of Lentic Water Bodies with ASVs Guided by Gradient-free Optimization/Contour Detection Algorithms Eva Besada-Portas, José María Girón-Sierra, Juan Jiménez, and José Antonio López-Orozco (Complutense University of Madrid) Abstract Abstract This paper presents a local-path planner for water quality monitoring involving an Autonomous Surface Vehicle (ASV). The planner determines new measuring waypoints based on the information collected so far, and on two gradient-free optimization and contour-detection algorithms. In particular, the optimization algorithm generates the locations where the variable/substance under study must be measured and uses them as the waypoints of the external loop of the Guidance, Navigation and Control system of our ASV. In addition, the contour algorithm obtains useful waypoints to determine the water body locations where the variable/substance under study reaches a given value. The paper also analyzes how the approach works via progressive simulations over an ASV carefully modelled with a set of non-linear differential equations. Preliminary results suggest that the approach can be useful in real-world single-ASV water-quality monitoring missions where there is no prior knowledge of the state and location of the variable/substance under study. Simulating Online Social Response: A Stimulus/Response Perspective Huajie Shao and Tarek Abdelzaher (UIUC); Sam Cohen and James Flamino (RPI); Jiawei Han and Minhao Jiang (UIUC); Gyorgy Korniss, Omar Malik, and Aamir Mandviwalla (RPI); Yuning Mao, Yu Meng, Wenda Qiu, and Dachun Sun (UIUC); Boleslaw Szymanski (RPI); Ruijie Wang, Chaoqi Yang, and Zhenzhou Yang (UIUC); Lake Yin (RPI); and Xinyang Zhang and Yu Zhang (UIUC) Abstract Abstract The paper describes a methodology for simulating online social media activities that occur in response to external events. A large number of social media simulators model information diffusion on online social networks.
However, information cascades do not originate in a vacuum. Rather, they often originate as a reaction to events external to the online medium. Thus, to predict activity on the social medium, one must investigate the relation between external stimuli and online social responses. The paper presents a simulation pipeline that features stimulus/response models describing how social systems react to external events of relevance to them. Two case studies are presented to test the fidelity of different models. One investigates online responses to events in the Venezuela election crisis. The other investigates online responses to developments of the China Pakistan Economic Corridor (CPEC). These case studies indicate that simple macroscopic stimulus/response models can accurately predict aggregate online trends. Technical Session · Manufacturing Applications [Virtual] MA 1 Chair: Alexandru Rinciog (TU Dortmund University) Simulation of Stochastic Rolling Horizon Forecast Behavior with Applied Outlier Correction to Increase Forecast Accuracy Wolfgang Seiringer (University of Applied Sciences Upper Austria), Klaus Altendorfer (Upper Austrian University of Applied Science), and Thomas Felberbauer (University of Applied Sciences St. Pölten) Abstract Abstract A two-stage supply chain is studied in this paper where customers provide demand forecasts to a manufacturer and update these forecasts on a rolling horizon basis. Stochastic forecast errors and a forecast bias, both related to periods before delivery, are modeled. Practical observations show that planning methods implemented in ERP (enterprise resource planning) systems often lead to instabilities in production plans that temporarily increase projected demands. From the manufacturer’s point of view, this behavior is observed as an outlier in the demand forecast values.
Therefore, two simple outlier correction methods are developed and a simulation study is conducted to evaluate their performance concerning forecast accuracy. In detail, the magnitude of each demand forecast is evaluated and, if a certain threshold is reached, the forecast is corrected. The study shows that the application of the outlier correction for forecast values leads to significant forecast accuracy improvement if such planning instabilities occur. A Biased-Randomized Discrete-Event Heuristic for the Permutation Flow Shop Problem with Multiple Paths Angel Juan (Universitat Oberta de Catalunya), Christoph Laroque (University of Applied Sciences Zwickau), Javier Panadero and Pedro Copado (Universitat Oberta de Catalunya), Madlene Leissau (University of Applied Sciences Zwickau), and Christin Schumacher (Technical University of Dortmund) Abstract Abstract Based on a real-life use-case, this paper discusses a manufacturing scenario where different jobs need to be processed by a series of machines. Each job type has to follow a pre-defined route in the hybrid flow shop. In addition, the aggregation of jobs in batches might be required at several points. This process can be modeled as a hybrid flow shop problem with additional restrictions. The objective is to find a permutation of jobs minimizing the makespan. In order to obtain high-quality solutions to the problem, the discrete-event simulation needs to be combined with an optimization component. Hence, we propose the use of a discrete-event heuristic. When combined with biased-randomized techniques, our approach is able to find solutions that significantly outperform those provided by employing simulation only. Moreover, the proposed methodology can be easily extended into a simheuristic, so that random processing times could also be considered.
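Two ingredients of a biased-randomized flow shop heuristic, a permutation-flow-shop makespan evaluation and a geometrically biased selection favoring the best-ranked candidate jobs, can be sketched as follows. The processing times, bias parameter, and longest-total-time priority rule are hypothetical illustrations; the paper's actual heuristic (with multiple paths and batching) is richer.

```python
import random

def makespan(perm, proc):
    """Completion-time recursion for a permutation flow shop; proc[job][machine]."""
    n_machines = len(proc[0])
    completion = [0.0] * n_machines   # completion[m] = C(previous job, machine m)
    for j in perm:
        for m in range(n_machines):
            earlier = completion[m - 1] if m > 0 else 0.0
            completion[m] = max(completion[m], earlier) + proc[j][m]
    return completion[-1]

def br_heuristic(proc, iters=200, beta=0.3, seed=42):
    """Multi-start with geometric bias toward the front of the priority list."""
    rng = random.Random(seed)
    jobs = sorted(range(len(proc)), key=lambda j: -sum(proc[j]))  # longest first
    best_perm, best_ms = list(jobs), makespan(jobs, proc)
    for _ in range(iters):
        pool, perm = list(jobs), []
        while pool:
            i = 0   # with prob. beta take position i, else move to the next one
            while i < len(pool) - 1 and rng.random() >= beta:
                i += 1
            perm.append(pool.pop(i))
        ms = makespan(perm, proc)
        if ms < best_ms:
            best_perm, best_ms = perm, ms
    return best_perm, best_ms

proc = [[3, 2], [1, 4], [2, 2]]   # three jobs, two machines (hypothetical times)
perm, ms = br_heuristic(proc)
```

Running many biased-randomized passes and keeping the best permutation typically improves on the deterministic priority order alone; on this tiny instance the priority order gives a makespan of 11, while the biased-randomized search finds shorter schedules.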
Optimal Minimal-Contact Routing of Randomly Arriving Agents Through Connected Networks Diptangshu Sen, Prasanna Ramamoorthy, and Varun Ramamohan (Indian Institute of Technology Delhi) Abstract Collision-free or contact-free routing through connected networks has been actively studied in the industrial automation and manufacturing context. Contact-free routing of personnel through connected networks (e.g., factories, retail warehouses) may also be required in the COVID-19 context. In this context, we present an optimization framework for identifying routes through a connected network that eliminate or minimize contacts between randomly arriving agents needing to visit a subset of nodes in the network in minimal time. We simulate the agent arrival and network traversal process, and introduce stochasticity in travel speeds, node dwell times, and compliance with assigned routes. We present two optimization formulations for generating optimal routes (no-contact and minimal-contact) on a real-time basis for each agent arriving at the network, given the route information of other agents already in the network. We generate results for the time-average number of contacts and normalized time spent in the network. Fabricatio-RL: A Reinforcement Learning Simulation Framework for Production Scheduling Alexandru Rinciog and Anne Meyer (TU Dortmund University) Abstract Production scheduling is the task of assigning job operations to processing resources such that a target goal is optimized. Constraints on job structure and resource capabilities, together with stochastic influences such as job arrivals, define individual problems. Reinforcement learning (RL) solvers are adaptive and potentially robust in highly stochastic settings. However, benchmarking RL solutions for stochastic problems is challenging, requiring the simulation of complex production settings while guaranteeing reproducible stochasticity. No such simulation is currently available.
To fill this gap, we introduce FabricatioRL, an RL-compatible, customizable and extensible benchmarking simulation framework. Our contribution is twofold: We first derive requirements to ensure that generic production setups can be covered, that the simulation framework can interface with both traditional approaches and RL, and that experiments are reproducible. Then, we detail the FabricatioRL design and implementation satisfying the obtained requirements in terms of framework input, core simulation process, and the interface with different scheduling systems. Vendor [Virtual] Platinum Sponsor Session Chair: Amy Greer (MOSIMTEC, LLC); Claudia Szabo (University of Adelaide) Technical Session · Covid-19 and Epidemiological Simulations [Virtual] Agent based models for Tracking the Spread of Covid-19 Chair: Esteban Lanzarotti (DC-ICC, UBA-CONICET) High Performance Agent-Based Modeling to Study Realistic Contact Tracing Protocols Stefan Hoops, Jiangzhuo Chen, Abhijin Adiga, Bryan Lewis, and Henning Mortveit (Biocomplexity Institute & Initiative); Justin Crow, Elena Diskin, Seth Levine, Helen Tazelaar, Brooke Rossheim, and Chris Ghaemmaghami (Virginia Department of Health); Carter Price (RAND); Hannah Baek (Biocomplexity Institute & Initiative); Rebecca Early (Virginia Department of Health); and Mandy Wilson, Dawen Xie, Samarth Swarup, Srinivasan Venkatramanan, Christopher Barrett, and Madhav V. Marathe (Biocomplexity Institute & Initiative) Abstract Contact tracing (CT) is an important and effective intervention strategy for controlling an epidemic. Its role becomes critical when pharmaceutical interventions are unavailable. Manual CT is resource intensive, and multiple protocols are possible; therefore, the ability to evaluate possible strategies is important.
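The value of evaluating tracing protocols in silico can be seen even in a toy agent-based model: infected agents are detected with some probability each day, and a tracing protocol then quarantines their network neighbors. The sketch below is not the authors' high-performance model; the random contact network and all rates are hypothetical.

```python
import random

def epidemic(n=300, p_edge=0.02, p_inf=0.08, p_recover=0.2,
             p_detect=0.3, tracing=True, days=120, seed=7):
    """Toy network SIR with optional contact tracing.
    Returns the attack rate (fraction of agents ever infected)."""
    rng = random.Random(seed)
    # random contact network (Erdos-Renyi style)
    nbrs = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                nbrs[i].add(j)
                nbrs[j].add(i)
    state = ['S'] * n
    state[0] = 'I'               # one index case
    quarantined = set()          # crude: quarantine is permanent
    for _ in range(days):
        infectious = [i for i in range(n)
                      if state[i] == 'I' and i not in quarantined]
        # transmission along network edges
        for i in infectious:
            for j in nbrs[i]:
                if state[j] == 'S' and j not in quarantined \
                        and rng.random() < p_inf:
                    state[j] = 'I'
        # detection, tracing, recovery
        for i in infectious:
            if rng.random() < p_detect:
                quarantined.add(i)            # isolate the detected case
                if tracing:
                    quarantined |= nbrs[i]    # quarantine traced contacts
            if rng.random() < p_recover:
                state[i] = 'R'
    return sum(s != 'S' for s in state) / n
```

Comparing epidemic(tracing=True) against epidemic(tracing=False) over many seeds gives a crude protocol evaluation; the paper's model adds realistic populations, delays, and compliance at scale.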
A Multi-aspect Agent-based Model of Covid-19: Disease Dynamics, Contact Tracing Interventions and Shared Space-driven Contagions Esteban Lanzarotti (Departamento de Computación, FCEyN-UBA / Instituto de Ciencias de la Computación (ICC-CONICET)); Francisco Roslan and Leandro Groisman (Departamento de Computación, FCEyN-UBA); and Lucio Santi and Rodrigo Castro (Departamento de Computación, FCEyN-UBA / Instituto de Ciencias de la Computación (ICC-CONICET)) Abstract In the quest to better understand the epidemic dynamics of COVID-19 and possible strategies to mitigate its impact, a wide range of simulation models have been developed for various purposes. Faced with a novel disease with little-known characteristics and an unprecedented impact, the need arises to model multiple aspects with very dissimilar dynamics in a consistent, formal, yet flexible and quick way, in order to then study the combined interaction of these dynamics. We present an agent-based model combining kinematic movement of agents, interactions between them and their surrounding space, and top-down control over the entire population. To achieve this, we extend the retQSS framework to model and simulate particle systems interacting with geometries. In this work, we study different contact tracing strategies and their efficacy in a population undergoing an epidemic process driven mainly by airborne infections in indoor environments. Simulating SARS-CoV-2 Transmission in the NYC Subway Alexander J. Washburn and Ye W. Paing (Hunter College CUNY), Pauline Lin (University of Melbourne), and Felisa J. Vazquez-Abad (CUNY) Abstract The impact of public transport on the spread of the virus has been a topic of disagreement. Some sources report that travel times are so small that contagion is insignificant, while others claim that the NYC subway was a major contributor to the spread of the virus in 2020. This study addresses this question.
While there is an enormous amount of data, it is impossible to know when people got infected. Our approach is to use a model of virus transmission to simulate contagion during travel, while using data from the turnstiles to make our simulation scenarios realistic. We combine the ghost model for train dynamics and a stopped Continuous Time Markov Chain (CTMC) for the virus transmission to create a hybrid simulation. Preliminary results indicate that our simulation tool may provide accurate answers to the question. In particular, it helps analyze the resulting risk under different transportation policies. Technical Session · MASM: Semiconductor Manufacturing [Virtual] MASM 2 Chair: Abdelgafar Hamed (Infineon Technologies AG) Simulation Model Simplification For Changing Product Mix Scenario Igor Stogniy (TU Dresden), Wolfgang Scholl (Infineon Technologies Dresden GmbH), and Hans Ehm (Infineon Technologies AG) Abstract Infineon Technologies Dresden has long used simplified simulation models to optimize production planning. However, the simplification is based on the gut feeling of experts who do not have time to analyze the various concepts in detail. In this paper, a detailed analysis of simulation model simplification by substituting constant delays for operations was performed under conditions close to the real world. A statistical model was developed to calculate the delay values. The simplification results based on the statistical model are compared with the results based on the detailed model. The experiments were carried out based on the MIMAC dataset 5 model.
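The core simplification idea above, replacing a detailed station with a constant delay equal to its mean flow time, can be illustrated on a single M/M/1 station, where the mean sojourn time has the closed form W = 1/(mu - lambda). The sketch below is a generic illustration, not the paper's statistical model; the arrival and service rates are hypothetical.

```python
import random

def mm1_sojourn_times(lam, mu, n=20000, seed=3):
    """Simulate job sojourn times in an M/M/1 queue via the
    Lindley recursion on waiting times."""
    rng = random.Random(seed)
    wait, out = 0.0, []
    for _ in range(n):
        service = rng.expovariate(mu)
        out.append(wait + service)             # sojourn = wait + service
        gap = rng.expovariate(lam)             # next interarrival time
        wait = max(0.0, wait + service - gap)  # Lindley recursion
    return out

lam, mu = 0.5, 1.0
# "detailed" station: simulated mean flow time
detailed = sum(mm1_sojourn_times(lam, mu)) / 20000
# "simplified" substitute: one constant delay, W = 1/(mu - lam) = 2.0
constant_delay = 1.0 / (mu - lam)
```

For stable, stationary traffic the two agree closely; the paper's contribution is to quantify when such a substitution remains acceptable as the product mix changes.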
On Scheduling A Photolithography Toolset Based On A Deep Reinforcement Learning Approach With Action Filter Taehyung Kim and Hyeongook Kim (KAIST), James Robert Morrison (Central Michigan University), Eungjin Kim (Samsung Display Company), and Tae-eog Lee (KAIST) Abstract Production scheduling of semiconductor manufacturing tools is a challenging problem due to the complexity of the equipment and systems in modern wafer fabs. In our study, we focus on the photolithography toolset and consider it as a non-identical parallel machine scheduling problem with random lot arrivals and auxiliary resource constraints. The proposed methodology strives to learn a near-optimal scheduling policy by incorporating WIP, masks, and the tardiness of jobs. An Action Filter (AF) is proposed as a methodology to eliminate illogical actions and speed up the learning process of agents. The proposed model was evaluated in a simulation environment inspired by practical photolithography scheduling problems across various settings with reticle and qualification constraints. Our experiments demonstrated improved performance compared to typical rule-based strategies. Relative to our learning methods, the weighted shortest processing time (WSPT) and apparent tardiness cost with setups (ATCS) rules perform 28% and 32% worse for weighted tardiness, respectively. Identifying Potentials and Impacts of Lead-Time Based Pricing in Semiconductor Supply Chains with Discrete-Event Simulation Tobias Leander Welling, Ludmila Quintao Noel Gomes De Carvalho, and Abdelgafar Ismail (Infineon Technologies AG) Abstract Due to a significant increase in demand fluctuations in the semiconductor industry triggered by the bullwhip effect, global supply chains are experiencing unprecedented pressure.
The industry-specific characteristics of short product life cycles, long lead times and a highly competitive market environment further decrease flexibility, although a robust and adjustable supply chain is required. In order to enable greater flexibility, this study investigates the hypothesis that Revenue Management can offer greater fulfilment of customer expectations while at the same time increasing revenue in the semiconductor industry. We tested this hypothesis in a discrete-event simulation based on a case study obtained from a semiconductor company. The study indicates that global supply chains, such as the ones in the semiconductor industry, can use Revenue Management methods to increase their revenue by 10% to 19% as well as to improve flexibility and customer satisfaction. Technical Session, Introductory Tutorial · Introductory Tutorials [Virtual] A tutorial on Participative Discrete Event Simulation in the virtual workshop environment Chair: Hossein Piri (University of British Columbia, Sauder School of Business) A tutorial on Participative Discrete Event Simulation in the virtual workshop environment Antuela Tako (Loughborough University) and Kathy Kotiadis (University of Kent) Abstract Facilitated discrete event simulation offers an alternative mode of engagement with stakeholders (clients) in simulation projects. Pre-COVID-19, this was undertaken in face-to-face workshops, but the new reality has meant that this is no longer possible for many of us around the globe. This tutorial explores PartiSim, short for Participative Simulation, as adapted to fit the new reality of holding virtual workshops with stakeholders. PartiSim is a participative and facilitated modelling approach developed to support simulation projects through a framework, stakeholder-oriented tools and manuals in facilitated workshops.
We describe a typical PartiSim study consisting of six stages, four of which involve facilitated workshops, and how it can be undertaken in a virtual workshop environment. We have developed games to provide those attending the tutorial with the experience of virtual facilitation. Commercial Case Study · Commercial Case Studies [Virtual] Productivity and Manufacturing Chair: Devdatta Deo (Simio LLC) Using Simulation And Machine Learning To Innovate Long-term Production And Operational Planning Rie Gaku (Momoyama Gakuin University); Louis Luangkesorn (Highmark Health); and Soemon Takakuwa (Chuo University, Nagoya University) Abstract In this study, machine learning and simulation technologies are applied to forecast future egg production operations on the basis of the production cycles of egg-laying hens. Internal business data and related historical external economic data are used to provide more accurate information for decision-making in production and operational planning, extending until the end of the production cycle. The key performance indicators required for long-term production planning, including average fodder costs, production output, and total sales of a production cycle, are designed and collated to support long-term production planning and operation, including fodder procurement and egg marketing strategies as well as financial decisions for each poultry production cycle. Improving the check-in processes of an airline company in an airport terminal Jaime Sotomayor and Alicia García-Hernández (baobab soluciones) and Alvaro Garcia-Sanchez (Universidad Politécnica de Madrid, baobab soluciones) Abstract This case study consisted of the analysis and improvement of Iberia’s check-in processes in the Adolfo Suárez-Madrid airport T4 terminal. A simulation model was developed in Simio for this purpose and different alternatives were analyzed. This study was carried out by baobab soluciones.
Applied Productivity AI/ML Platform Based Lot Cycle Time Prediction in Semiconductor Manufacturing Madhav Kidambi and Jeong Cheol Seo (Applied Materials) Abstract The Applied Productivity AI/ML platform is used to predict lot cycle time in a semiconductor manufacturing fab. Artificial Intelligence (AI) and Machine Learning (ML) are disrupting the manufacturing industry in several areas, augmenting engineers’ efforts with predictions such as lot cycle time and analytics such as dynamic bottleneck detection. The platform supports an end-to-end, enterprise-scale process for data engineers and ML/IE engineers to deliver an ML module or function into their production systems. In this case study, the general process flow for deploying an ML function with this platform is introduced. Accurate prediction of cycle time (CT) plays an important role in enabling semiconductor manufacturers to promise good delivery times. How to use this platform to build an ML model that predicts lot cycle time, and to deploy that model into the production environment of a semiconductor manufacturing fab, is also explained. This case study shows the effectiveness and efficiency of the platform. Vendor · Vendor [Virtual] Platinum Sponsor Session Chair: Amy Greer (MOSIMTEC, LLC); Claudia Szabo (University of Adelaide) Technical Session · Model Uncertainty and Robust Simulation [Virtual] Simulation Optimization, Prediction, and Estimation Chair: Ilya Ryzhov (University of Maryland) Efficient Black Box Importance Sampling for VaR and CVaR Estimation Anand Deo and Karthyek Murthy (Singapore University of Technology and Design) Abstract This paper considers efficient Importance Sampling (IS) for the estimation of tail risks of a loss defined in terms of a sophisticated object such as a machine learning predictor or a mixed-integer linear optimization formulation.
Assuming only black-box access to the loss and the distribution of the underlying random vector, the paper presents an efficient IS algorithm for estimating the Value at Risk and Conditional Value at Risk. The key challenge in any IS procedure, namely, identifying an appropriate change-of-measure, is automated with a self-structuring IS transformation that learns and replicates the concentration properties of the conditional excess from less rare samples. The resulting estimators enjoy asymptotically optimal variance reduction when viewed on the logarithmic scale. Simulation experiments, both on synthetic and real datasets, highlight the efficacy and practicality of the proposed IS scheme. Dynamic Sampling Policy for Subset Selection Gongbo Zhang and Yijie Peng (Peking University), Jianghua Zhang (Shandong University), and Enlu Zhou (Georgia Institute of Technology) Abstract We consider the problem of selecting a subset of a finite number of competing alternatives via Monte Carlo simulation, where the subset contains the top-m alternatives. Under the Bayesian framework, we develop a dynamic sampling policy to efficiently learn and select the top-m alternatives. The proposed sampling policy is proved to be consistent, i.e., the selected alternatives will be the true top-m alternatives as the simulation budget goes to infinity. Numerical results show that the proposed sampling policy outperforms existing ones.
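The flavor of dynamic sampling policies for top-m selection can be conveyed with a simple, generic budget-allocation loop (not the authors' policy): after an initial stage, each additional simulation goes to the alternative whose sample mean is most ambiguous relative to the current top-m boundary. All problem data below are hypothetical.

```python
import random
import statistics

def select_top_m(true_means, m, budget=2000, n0=10, seed=1):
    """Generic sequential top-m selection sketch: keep sampling the
    alternative closest (in standard-error units) to the boundary
    between the current top-m and the rest."""
    rng = random.Random(seed)
    k = len(true_means)
    # simulate one noisy observation of alternative i (unit-variance noise)
    draw = lambda i: rng.gauss(true_means[i], 1.0)
    samples = [[draw(i) for _ in range(n0)] for i in range(k)]
    spent = k * n0
    while spent < budget:
        mu = [statistics.fmean(s) for s in samples]
        order = sorted(range(k), key=lambda i: -mu[i])
        boundary = (mu[order[m - 1]] + mu[order[m]]) / 2.0
        # most ambiguous alternative: sample mean nearest the boundary,
        # measured in units of its standard error
        target = min(range(k), key=lambda i: abs(mu[i] - boundary)
                     * len(samples[i]) ** 0.5 / statistics.stdev(samples[i]))
        samples[target].append(draw(target))
        spent += 1
    mu = [statistics.fmean(s) for s in samples]
    return set(sorted(range(k), key=lambda i: -mu[i])[:m])
```

The loop concentrates effort on the boundary pair, which is where extra samples most reduce the probability of a wrong subset; the paper derives a principled Bayesian version of this intuition with a consistency proof.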
Short-Term Adaptive Emergency Call Volume Prediction Elioth Sanabria, Henry Lam, Enrique Lelo de Larrea, and Jay Sethuraman (Columbia University); Edward Dolan, Nicholas Johnson, and Timothy Kepler (FDNY); Sevin Mohammadi and Audrey Olivier (Columbia University); Afsan Quayyum (FDNY); Andrew Smyth (Columbia University); and Kathleen Thomson (FDNY) Abstract Sudden periods of extreme and persistent changes in the distribution of medical emergencies can trigger resource planning inefficiencies for Emergency Medical Services, causing delayed responses and increased waiting times. Predicting such changes and reacting adaptively can alleviate these adverse impacts. In this paper, we propose a simple framework to enhance historically calibrated call volume models, the latter a focus of study in the arrival estimation literature, to give more accurate short-term predictions by refitting their residuals as time series. We discuss some justification of our framework from the perspective of doubly stochastic Poisson processes. We illustrate our methodology in predicting the hourly call volume to the 911 call center during the Covid-19 pandemic in NYC, showing how it can improve the performance of baseline historical estimators by close to 50%, measured by the out-of-sample prediction error for the next hour. Estimating a Conditional Expectation with the Generalized Likelihood Ratio Method Yi Zhou, Michael Fu, and Ilya Ryzhov (University of Maryland) Abstract In this paper, we consider the problem of efficiently estimating a conditional expectation. By formulating the conditional expectation as a ratio of two derivatives, we can apply the generalized likelihood ratio method to express the conditional expectation using ordinary expectations with indicator functions, which generalizes the conditional density method.
Based on an empirical distribution estimated from simulation, we provide guidance on selecting the appropriate formulation of the derivatives to reduce the variance of the estimator. Technical Session · Simulation Optimization [Virtual] Applications Chair: Jeff Hong (City University of Hong Kong) Recursive Midpoint Search for Line of Sight Paul Francis Evangelista and Vikram Mittal (USMA) Abstract Line of sight (LoS) calculation and LoS analytics support a wide variety of applications, particularly simulations that involve interactions of entities across simulated terrain. This research proposes a LoS algorithm that recursively searches the midpoints of array segments and achieves significant efficiency gains over a naive linear search. The algorithm casts every LoS query into an array of length $2^n$, enabling precise and complete indexing of an array ordered by the recursive midpoints of the array. This method samples the array with broadness and symmetry. Experimental results demonstrate significant efficiency gains when compared to linear search approaches for LoS. This search algorithm has the potential to apply more broadly to any situation that benefits from binary search over unknown data that may be autocorrelated. A Simulation Driven Optimization Algorithm for Scheduling Sorting Center Operations Supratim Ghosh, Aritra Pal, Prashant Kumar, Ankush Ojha, Aditya Avinash Paranjape, Souvik Barat, and Harshad Khadilkar (Tata Consultancy Services Ltd) Abstract Parcel sorting operations in logistics enterprises aim to achieve a high throughput of parcels through sorting centers. These sorting centers are composed of large circular conveyor belts on which incoming parcels are placed, with multiple arms known as chutes for sorting the parcels by destination, followed by packing into roller cages and loading onto outbound trucks.
Modern sorting systems need to complement their hardware innovations with sophisticated algorithms and software to map destinations and workforce to specific chutes. While state-of-the-art systems operate with fixed mappings, we propose an optimization approach that runs before every shift and uses real-time forecasts of destination demand and labor availability in order to maximize throughput. We use simulation to improve the performance and robustness of the optimization solution to stochasticity in the environment, through closed-loop tuning of the optimization parameters. Option Pricing By Neural Stochastic Differential Equations: A Simulation-Optimization Approach Shoudao Wang and Jeff Hong (Fudan University) Abstract Classical option pricing models rely on prior assumptions made about the dynamics of the underlying assets. While empirical evidence shows that these models may partially explain observed option prices, their performance may be poor when the actual situation deviates from the assumptions. Neural network models are capable of learning the underlying relationship from data. However, to avoid over-fitting, these models require massive amounts of data, which are not available for option pricing problems. We propose a new model that integrates neural networks into a classical option pricing model, thus increasing model flexibility while requiring a reasonable amount of data. We show that the training of the model, also known as calibration, may be formulated as a simulation-optimization problem, and that it may be solved in a way that is compatible with the training of neural networks. Preliminary numerical results show that our approach appears to work well.
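The "classical model" component such approaches build on is typically evaluated by simulation. As background, a minimal Monte Carlo pricer for a European call under geometric Brownian motion, checked against the Black-Scholes closed form, can be sketched as follows (all parameter values hypothetical):

```python
import math
import random

def bs_call_mc(s0, k, r, sigma, t, n=100000, seed=0):
    """Monte Carlo price of a European call: simulate terminal prices
    under geometric Brownian motion and discount the mean payoff."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        st = s0 * math.exp((r - 0.5 * sigma ** 2) * t
                           + sigma * math.sqrt(t) * z)
        total += max(st - k, 0.0)
    return math.exp(-r * t) * total / n

def bs_call_closed(s0, k, r, sigma, t):
    """Black-Scholes closed form, used to sanity-check the simulator."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) \
        / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s0 * phi(d1) - k * math.exp(-r * t) * phi(d2)
```

In the paper's setting the drift/volatility structure is augmented with neural networks, so no closed form exists and the simulated price itself becomes the objective that calibration must match to market quotes.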
Technical Session · Covid-19 and Epidemiological Simulations [Virtual] Modeling the Spread of COVID-19 Chair: Glenn Davidson (Carleton University) Studying Covid-19 Spread Using a Geography Based Cellular Model Glenn Davidson and Gabriel Wainer (Carleton University) Abstract Infectious disease models are in widespread use among governments and health agencies to plan COVID-19 public health policies, and their accuracy is of utmost importance to public health. Common approaches to modeling infectious diseases include compartmental differential equation models and Cellular Automata, both of which are difficult to use in predicting the spread of disease over several geographical regions, as they often oversimplify the geographical features they are meant to model. A geography-based Cell-DEVS approach to modelling pandemics is presented. The compartmental model presented considers additional factors such as movement restriction effects, disease incubation, population disobedience to public health policies, and a dynamic fatality rate. The model offers deterministic predictions for any number of regions simultaneously and can be easily adapted to unique geographical areas. Informing University Covid-19 Decisions Using Simple Compartmental Models Benjamin Hurt, Aniruddha Adiga, Madhav Marathe, and Christopher L. Barrett (University of Virginia) Abstract Tracking the COVID-19 pandemic has been a major challenge for policy makers. Although several efforts are ongoing for accurate forecasting of cases, deaths, and hospitalizations at various resolutions, few have been attempted for college campuses despite their potential to become COVID-19 hot-spots. In this paper, we present a real-time effort towards weekly forecasting of campus-level cases during the fall semester for four universities in Virginia, United States. We discuss the challenges related to data curation.
A causal model is employed for forecasting with one free time-varying parameter, calibrated against case data. The model is then run forward in time to obtain multiple forecasts. We retrospectively evaluate the performance and, while forecast quality suffers during the campus reopening phase, the model makes reasonable forecasts as the fall semester progresses. We provide sensitivity analysis for several model parameters. In addition, the forecasts are provided weekly to various state and local agencies. Travel Cadence and Epidemic Spread Lauren Streitmatter (University of Toronto) and Peter Zhang (Carnegie Mellon University) Abstract In this paper, we study how interactions between populations impact epidemic spread. We extend the classical SEIR model to include both integration-based disease transmission simulation and population flow. Our model differs from existing ones by having a more detailed representation of travel patterns, without losing tractability. This allows us to study the epidemic consequences of inter-regional travel with high fidelity. In particular, we define travel cadence as a two-dimensional measure of inter-regional travel, and show that both dimensions modulate epidemic spread. This technical insight leads to policy recommendations, pointing to a family of simple policy trajectories that can effectively curb epidemic spread while maintaining a basic level of mobility. Invited Paper, Contributed Paper, Technical Session · Analysis Methodology [Virtual] Estimation Methodology 2 Chair: Yijie Peng (George Mason University) Variance Reduction for Generalized Likelihood Ratio Method in Quantile Sensitivity Estimation Yijie Peng (Peking University), Michael Fu (University of Maryland), Hu Jiaqiao (State University of New York), Pierre L'Ecuyer (University of Montreal), and Bruno Tuffin (INRIA) Abstract We apply the generalized likelihood ratio (GLR) methods in Peng et al. (2018) and Peng et al.
(2021) to estimate quantile sensitivities. Conditional Monte Carlo and randomized quasi-Monte Carlo methods are used to reduce the variance of the GLR estimators. The proposed methods are applied to a toy example and a stochastic activity network example. Numerical results show that the variance reduction is significant. Calibration Using Emulation of Filtered Simulation Results Ozge Surer and Matthew Plumlee (Northwestern University) Abstract Calibration of parameters in simulation models is necessary to develop sharp predictions with quantified uncertainty. A scalable method for calibration involves building an emulator after conducting an experiment on the simulation model. However, when the parameter space is large, meaning the parameters are quite uncertain prior to calibration, much of the parameter space can produce unstable or unrealistic simulator responses that drastically differ from the observed data. One solution to this problem is to simply discard, or filter out, the parameters that gave unreasonable responses and then build an emulator only on the remaining simulator responses. In this article, we demonstrate the key mechanics of an approach that emulates filtered responses but also avoids unstable and incorrect inference. These ideas are illustrated on a real-data example of calibrating a COVID-19 epidemiological simulation model. On Constructing Confidence Region for Model Parameters in Stochastic Gradient Descent via Batch Means Yi Zhu (WeRide Corp) and Jing Dong (Columbia University) Abstract We study an easy-to-implement algorithm to construct asymptotically valid confidence regions for model parameters in stochastic gradient descent. The main idea is to cancel out the covariance matrix, which is hard or costly to estimate, using the batch means method with a fixed number of batches. In developing the algorithm, we establish a process-level functional central limit theorem for Polyak-Ruppert averaging iterates.
We also extend the batch means method to accommodate more general batch size specifications. Technical Session · Model Uncertainty and Robust Simulation [Virtual] Data-driven Simulation Optimization Chair: Hoda Bidkhori (University of Pittsburgh) Data-driven Two-stage Stochastic Programming with Marginal Data Ke Ren and Hoda Bidkhori (University of Pittsburgh) Abstract We present new methodologies to solve data-driven two-stage stochastic optimization when only marginal data are available. We propose a novel data-driven distributionally robust framework that only uses the available marginal data. The proposed model is distinguished from traditional techniques for handling missing data in that it conducts an integrated analysis of the missing data and the optimization problem, whereas classical methods conduct separate analyses by first recovering the missing data and then finding the optimal solutions. On the theoretical side, we show that our model produces risk-averse solutions and guarantees finite sample performance. Empirical experiments are conducted on two applications based on synthetic data and real-world data. We validate the proposed finite sample guarantee and show that the proposed approach achieves better out-of-sample performance and higher reliability than the classical data imputation-based approach. A Bayesian Approach to Online Simulation Optimization with Streaming Input Data Tianyi Liu, Yifan Lin, and Enlu Zhou (Georgia Institute of Technology) Abstract We consider simulation optimization under input uncertainty, where the unknown input parameter is estimated from streaming data arriving in batches over time. Moreover, the data may depend on the decision in place at the time they are generated. We take an online approach that jointly estimates the input parameter via the Bayesian posterior distribution and updates the decision by applying stochastic gradient descent (SGD) to the Bayesian average of the objective function.
We show the convergence of our approach. In particular, our consistency result for the Bayesian posterior distribution with decision-dependent data might be of independent interest for Bayesian estimation. We demonstrate the empirical performance of our approach on a simple numerical example. Distributionally Robust Cycle and Chain Packing with Application to Organ Exchange Best Contributed Theoretical Paper - Finalist Duncan C. McElfresh (University of Maryland), Ke Ren (University of Pittsburgh), John P. Dickerson (University of Maryland), and Hoda Bidkhori (University of Pittsburgh) Abstract We consider cycle packing problems motivated by kidney exchange. In kidney exchange, patients with willing but incompatible donors enter into an organized market and trade donors in cyclic structures. Exchange programs attempt to match patients and donors based on the quality of matches. Current methods use a point estimate for the utility of a potential match that is drawn from an unknown distribution over possible true qualities. We apply the conditional value-at-risk paradigm to the size-constrained cycle and chain packing problem. We derive sample average approximation and distributionally-robust-optimization approaches to maximize the true quality of matched organs in the face of uncertainty over the quality of potential matches. We test our approaches on realistic kidney exchange data and show that they outperform state-of-the-art approaches. In the experiments, we use randomly generated exchange graphs resembling the structure of real exchanges, using anonymized data from the United Network for Organ Sharing. Physics of Decision: Application to Polling Place Risk Management Thibaut Cerabona and Frederick Benaben (IMT Mines Albi) and Benoit Montreuil, Ali Vatankhah Barenji, and Dima Nazzal (H. Milton Stewart School of Industrial and Systems Engineering) Abstract Managing a system involves defining, assessing and trying to reach objectives.
Objectives are often measured using Key Performance Indicators (KPIs). In the context of instability (crisis, global pandemic or just everyday uncertainty), managers have to adapt to multi-dimensional complex situations. This article introduces an innovative approach to risk and opportunity management to help managers in their decision-making processes. This approach enables managers to deal with the considered system’s performance trajectory by viewing and assessing the impact of potentialities (risks and opportunities). Potentiality impacts are modeled as forces that modify the system’s performance trajectory and its position within its multi-dimensional KPI framework. This approach is illustrated by an application to polling place risk management. This paper presents the results of simulations of such a place confronted with pre-identified risks within a KPI framework. Vendor [Virtual] Platinum Sponsor Session Chair: Amy Greer (MOSIMTEC, LLC); Claudia Szabo (University of Adelaide) Technical Session · Modeling Methodology [Virtual] Queueing Models Chair: Veronica Gil-Costa (UNSL, CCT CONICET San Luis) Measuring the Overlap with Other Customers in the Single Server Queue Sergio Palomo and Jamol Pender (Cornell University) Abstract The single server queue is one of the most basic queueing systems for modeling stochastic waiting dynamics. Most work involving the single server queue only analyzes the customer or agent behavior. However, in this work, we are inspired by COVID-19 applications and are interested in the interaction between customers and, more specifically, the time that adjacent customers overlap in the queue. To this end, we derive a new recursion for this overlap time and study the steady-state behavior of the overlap time via simulation and probabilistic analysis.
We find that the overlap time between adjacent customers in the M/M/1 queue has a conditional distribution given by an exponential distribution; however, as the distance between customers grows, the probability of a non-negative overlap time decreases geometrically. We also find via simulation that the exponential distribution still holds when the underlying distributions are non-exponential, hinting at a more general result. Calibrating Infinite Server Queueing Models Driven by Cox Process Ruixin Wang and Harsha Honnappa (Purdue University) Abstract This paper studies the problem of calibrating a $\text{Cox}/G/\infty$ infinite server queue to a dataset consisting of the number in the system and the age of the jobs currently in service, sampled at discrete time points. This calibration problem is complicated by the fact that the arrival intensity and the service time distribution must be jointly calibrated. Furthermore, maximizing the finite dimensional distribution (FDD) of the number-in-system process (the natural calibration objective) is intractable in this setting, since the computation of the FDDs involves an intractable integration over the path measure of the Cox input process. We derive an approximate inference procedure that maximizes a lower bound on the FDDs using stochastic gradient descent. This lower bound is tight when the calibrated parameters coincide with those of the `true' model. We present extensive numerical experiments that demonstrate the efficacy and validity of the proposed method. Hide Your Model! Layer Abstractions for Data-Driven Co-Simulations Moritz Gütlein, Reinhard German, and Anatoli Djanatliev (FAU Erlangen-Nuremberg) Abstract Modeling and simulating problems that span multiple domains can be tricky. Often, the need for a co-simulation arises, for example because the modeling cannot be done with a single tool.
Domain experts may face a barrier when it comes to the implementation of such a co-simulation. In addition, the demand for integrating data from various sources into simulation models seems to be growing. Therefore, we propose an abstraction concept that hides simulators and models behind generalized interfaces that are derived from prototypical classes. The data-driven abstraction concept facilitates having an assembly kit with predefined simulator building blocks that can be easily plugged together. Furthermore, data streams can be seamlessly ingested into such a composed model. Likewise, the co-simulation can be accessed via the resulting interfaces for further processing and interactions. Technical Session · Complex, Intelligent, Adaptive and Autonomous Systems [Virtual] V&V for M&S of Complex Systems Chair: Hessam Sarjoughian (Arizona State University) Composability Verification of Complex Systems Using Colored Petri Nets Imran Mahmood and Syed Hassan Askari (National University of Sciences and Technology, School of Electrical Engineering and Computer Science) and Hessam S. Sarjoughian (Arizona State University; School of Computing, Information, and Decision Systems Engineering) Abstract The discipline of component-based modeling and simulation offers promising gains, including reductions in development cost, time, and system complexity. This paradigm promotes the use and reuse of modular components for the adequate development of complex simulations. Achieving effective and meaningful model reuse through the composition of components still remains a daunting challenge. “Composability”, an integral part of this challenge, is the capability to select and assemble model components in various combinations to satisfy specific user requirements. In this paper we propose the use of Colored Petri Nets for component-oriented model development, model composition, and the verification of composed models using state-space analysis techniques.
We present a case study of an elevator model as a proof of concept. Our case study explains the proposed process of developing and composing CPN-based model components and verifying the composed model using state-space analysis. Simulation and Model Validation for Mental Health Factors Using Multi-Methodology Hybrid Approach Arsineh Boodaghian Asl, Jayanth Raghothama, Adam Darwich, and Sebastiaan Meijer (KTH Royal Institute of Technology) Abstract To promote policy analysis and decision-making in mental health and well-being, simulations are used to scrutinize causal maps and provide policymakers with reasonable evidence. This paper proposes and illustrates a multi-methodology hybrid approach by building a hierarchy of models, moving from a system dynamics model to a simulation based on PageRank to quantify and assess a complex mental health map. The motives are: (1) to aid scenario analysis and comparison for possible policy interventions, (2) to quantify and validate mental health factors, and (3) to gain new insights into the core and confounding factors that affect mental health. The results indicate that the approach identifies factors that cause significant and frequent variation in mental health. Furthermore, validation confirms the PageRank model's accuracy and detects minor fluctuations and variation in the model's output behavior. Data-driven Modelling of Repairable Fault Trees From Time Series Data With Missing Information Parisa Niloofar and Sanja Lazarova-Molnar (University of Southern Denmark, SDU) Abstract Fault tree analysis is one of the most popular techniques for dependability analysis of a wide range of systems. The true fault-related behavior of a system is more accurately reflected if the system's fault tree is derived from a combination of observational data and expert knowledge, rather than expert knowledge alone.
The concept of learning fault trees from data becomes more significant when systems change their behaviors during their lifetimes. We present an algorithm for learning fault trees of systems with missing information on fault occurrences of basic events. This algorithm extracts repairable fault trees from incomplete multinomial time series data, and then uses simulation to estimate the system's reliability measures. Our algorithm is not limited to exponential distributions or binary events. Furthermore, we assess the sensitivity of our algorithm to different percentages of missingness and amounts of available data. Advanced Tutorial · Advanced Tutorials [Virtual] Toward Unbiased Deterministic Total Orderings of Parallel Simulations with Simultaneous Events Chair: Soumyadip Ghosh (IBM T. J. Watson Research Center) Toward Unbiased Deterministic Total Orderings of Parallel Simulations with Simultaneous Events Neil McGlohon and Christopher D. Carothers (Rensselaer Polytechnic Institute) Abstract In the area of discrete event simulation (DES), event simultaneity occurs when any two events are scheduled to happen at the same point in simulated time. Since events in DES are the sole mechanism for state change, ensuring a consistent real-time event processing order is crucial to maintaining deterministic execution. This is synonymous with finding a consistent total ordering of events. Commercial Case Study · Commercial Case Studies [Virtual] Supply Chains and Virtual Plants Chair: David T. Sturrock (Simio LLC) Increasing Efficiency of Central Mexico Berry DC Network Khaled Mabrouk (Sustainable Productivity Solutions) Abstract Berries are delivered to our shelves year-round, and various regions are each capable of harvesting for roughly 6-8 months per year. Central Mexico has grown into a significant source of berries during the Winter & Spring months.
With this growth in Central Mexico production, a California Central Coast berry producer utilized simulation to vet whether to commit to a hub strategy for managing their packaging and, if so, what the optimal inventory management policies would be. This project is a few years old, and the berry producer has since expanded the use of the hub strategy to additional regions in their network. Supply Chain Reduction in Regulated Markets Andreas Schoechtel (Thermo Electron LED GmbH, Durham University Business School) and Riccardo Mogre (Durham University Business School) Abstract This paper investigates the impact of a complexity-reduction project conducted in the life science industry for regulated products. The paper presents the complex infrastructure of antibody manufacturing and distribution and the benefits which can be gained by applying discrete-event simulation to reduce complexity in a regulated environment. The digital twin model allows us to evaluate multiple scenarios to support decision making when redesigning a supply chain for regulated products. The base case and the complexity-reduced case allow us to evaluate the potential benefits of complexity reduction on the total supply chain costs and other key performance indicators. Based on our analysis, we identify significant improvements. Virtual Factory for Formulation Plants in Life Science Manufacturing Jerome Frutiger, Manuel Pereira Remelhe, and Andrea Vester (Bayer AG) Abstract This case study provides an overview of the Virtual Factory concept developed for chemical and pharmaceutical manufacturing facilities at Bayer AG. First, we will outline the process optimization challenges of formulation plants for crop protection agents as well as pharmaceutical products. Then we will cover input data analysis, the modeling concept of the Virtual Factories, what-if scenario optimizations, real-time data connection procedures, and simulation-based planning & scheduling.
We will conclude by providing our practitioner's viewpoint on the various future opportunities and the road to Digital Twins. Technical Session · Simulation Optimization [Virtual] Ranking & Selection Chair: Sait Cakmak (Georgia Institute of Technology) Estimation When Both Covariance and Precision Matrices Are Sparse Shev MacNamara and Erik Schlogl (University of Technology Sydney) and Zdravko Botev (University of New South Wales) Abstract We offer a method to estimate a covariance matrix in the special case that both the covariance matrix and the precision matrix are sparse, a constraint we call double sparsity. The estimation method is maximum likelihood, subject to the double sparsity constraint. In our method, only a particular class of sparsity pattern is allowed: both the matrix and its inverse must be subordinate to the same chordal graph. This includes the class of banded matrices with banded inverses, for example. Compared to a naive enforcement of double sparsity, our chordal graph approach exploits a special algebraic local inverse formula. This local inverse property makes computations that would usually involve an inverse of either the precision matrix or the covariance matrix much faster. In the context of estimation of covariance matrices, our proposal appears to be the first to find such special pairs of covariance and precision matrices. Contextual Ranking and Selection with Gaussian Processes Sait Cakmak (Georgia Institute of Technology), Siyang Gao (City University of Hong Kong), and Enlu Zhou (Georgia Institute of Technology) Abstract In many real-world problems, we are faced with the problem of selecting the best among a finite number of alternatives, where the best alternative is determined based on context-specific information. In this work, we study the contextual ranking and selection problem under a finite-arm, finite-context setting, where we aim to find the best alternative for each context.
We use a separate Gaussian process to model the reward for each arm, derive the large deviations rate function for both the expected and worst-case contextual probability of correct selection, and propose an iterative algorithm for maximizing the rate function. Numerical experiments show that our algorithm is highly competitive in terms of sampling efficiency, while having significantly smaller computational overhead. Expected Value of Information Methods for Contextual Ranking and Selection: Clinical Trials and Simulation Optimization Andres Alban, Stephen E. Chick, and Spyros Zoumpoulis (INSEAD) Abstract We consider the contextual ranking and selection problem, which aims to learn the best treatment as a function of covariates when expected outcomes are unknown but can be learned from noisy observations. We develop a sequential allocation policy based on Bayesian expected value of information methods, called fEVI, to learn the best treatment for a finite set of covariates. We observe good performance of the fEVI allocation policy in simulation experiments and find that prior distributions which accurately reflect correlations across treatments and patient types can improve sampling effectiveness with limited sample sizes. We compare the performance between the case when covariates are random arrivals from a population and the case when the allocation policy chooses covariates. In experiments, the benefit of fEVI over allocation policies that sample randomly is much larger than the benefit from being able to choose covariates, or from using a prior that accurately reflects correlations. Plenary · PhD Colloquium PhD Colloquium Keynote Chair: Chang-Han Rhee (Northwestern University) New Trends in Simulation and Decision Making Under Uncertainty Jose Blanchet (Stanford University) Abstract Stochastic simulation comprises a powerful set of tools for decision-making under uncertainty.
These tools are being rapidly absorbed into a wide range of data analytics applications. But as new technologies disseminate and open the door for widespread applicability of state-of-the-art solutions, new challenges and opportunities also arise for simulation. For example, application areas such as energy systems, finance, healthcare, pricing, sustainability, and transportation provide instances in which modeling, simulation, and data analytics lie at the heart of key decision-making problems. Many of these problems share common characteristics. They are large scale; in many cases they are ill-posed or poorly specified; they demand robust solutions; their natural formulation is computationally hard; and in many cases they require interpretable solutions, because a human is ultimately responsible for the impactful consequences of a policy. This talk will expose some of these challenges and some methodological ideas that are currently being developed to address them. Doctoral Colloquium · PhD Colloquium PhD Colloquium I Chair: Chang-Han Rhee (Northwestern University) Hyperparameter Optimization of Deep Neural Network with Applications to Medical Device Manufacturing Gautham Sunder (University of Minnesota, Carlson School of Management) Abstract Bayesian Optimization (BO), a class of Response Surface Optimization (RSO) methods for nonlinear functions, is a commonly adopted strategy for hyperparameter optimization (HPO) of Deep Neural Networks (DNNs). Through a case study at a medical device manufacturer, we empirically illustrate that, in some cases, HPO problems can be well approximated by a second-order polynomial model, and in such cases, classical response surface optimization (C-RSO) methods are demonstrably more efficient than BO. In this study, we propose Compound-RSO, a highly efficient three-stage batch sequential strategy for RSO when there is uncertainty in the complexity of the response surface.
Through a simulation study and a case study at a medical device manufacturer, we illustrate that Compound-RSO is more efficient than BO for approximating a second-order response surface and has comparable results to BO when the response surface is complex and nonlinear. Contextual Ranking and Selection with Gaussian Processes Sait Cakmak (Georgia Institute of Technology) Abstract In many real-world problems, we are faced with the problem of selecting the best among a finite number of alternatives, where the best alternative is determined based on context-specific information. In this work, we study the contextual ranking and selection problem under a finite-arm, finite-context setting, where we aim to find the best alternative for each context. We use a separate Gaussian process to model the reward for each arm, derive the large deviations rate function for both the expected and worst-case contextual probability of correct selection, and propose an iterative algorithm for maximizing the rate function. Numerical experiments show that our algorithm is highly competitive in terms of sampling efficiency, while having significantly smaller computational overhead. Estimating the Effectiveness of Non-pharmaceutical Interventions in Heterogeneous Populations During an Emerging Infectious Disease Epidemic Johannes Ponge (University of Münster) Abstract Non-pharmaceutical interventions (NPIs) such as quarantining or school closures are immediate measures to contain the diffusion of an emerging infectious disease in a susceptible population in the absence of therapeutics or vaccinations. However, the effectiveness of containment measures strongly depends on regionally heterogeneous demographic structures (imagine school closures, which yield limited contributions to disease containment in areas with little to no school-age population).
In my work, I present a pathogen-generic agent-based simulation approach to produce regional estimates of the effectiveness of NPIs. The thesis consists of three blocks: first, an approach to generate realistic synthetic populations based on publicly available census data; second, a modular agent-based model architecture to enable simulations of various pathogens, populations, and NPIs; third, a case study demonstrating the evaluation of regional NPI effectiveness in the context of the German COVID-19 epidemic. I suggest that my work will support the development of more sophisticated intervention strategies during emerging epidemics. Simulation Optimization for a Digital Twin Using a Multi-fidelity Framework Yiyun Cao (University of Southampton) Abstract Digital twin technology is increasingly ubiquitous in manufacturing, and there is a need to increase the efficiency of optimization methods that use digital twins to answer questions about the real system. These methods typically support short-term operational decisions and, as a result, optimization methods need to return results in real or near-to-real time. This is especially challenging in manufacturing systems, as the simulation models are typically large and complex. In this extended abstract, we briefly describe an algorithm for a multi-fidelity model that uses a simpler low-fidelity neural network metamodel in the first stage of the optimization and a high-fidelity simulation model in the second stage. It is designed to find a good solution using a relatively small number of replications of high-fidelity models for problems having more alternatives than conventional ranking and selection procedures can handle. Applying Discrete-event Simulation and Value Stream Mapping to Reduce Waste in an Automotive Engine Manufacturing Plant Ana Carolina M.
Moreira (Auburn University) Abstract This paper applies a combination of Value Stream Mapping (VSM) and Discrete-Event Simulation (DES) in an automotive engine manufacturing plant. First, a current-state VSM was created and the sources of waste were identified. The Leak Test area and the engine impregnation process were identified as major sources of waste. Based on that, two potential improvement scenarios were developed and analyzed using DES. The simulation was used to compare key measures of performance in the current state and the proposed scenarios, using different settings for adjustable system parameters. Results showed improvements of up to 29% in annual engine impregnation cost for one scenario, without detriment to other measures. The study's major takeaway is demonstrating that VSM in conjunction with DES is a powerful alternative for studying changes in production processes, leveraging the advantages of both methodologies. Real-time Generation and Exploitation of Discrete Event Simulation Models as Decision-support Tools for Manufacturing Giovanni Lugaresi (Politecnico di Milano) Abstract Complex manufacturing systems require digital decision-support tools for optimal production planning and control. Discrete event simulation models can guarantee the ability to take prompt decisions at any time, provided an up-to-date model is available. Hence, techniques for prompt model generation or adaptation to the physical system have to be developed. The literature is rich in approaches to generate digital models from available datasets. Yet, such techniques are mostly suited to managerial processes and cannot properly support manufacturing applications. This research concerns automated model generation for production systems. It is beneficial for the development of real-time decision-support systems in manufacturing environments.
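Several of the colloquium entries above (Cao, Moreira, Lugaresi) build on discrete-event simulation of queueing and production flows. As a purely illustrative sketch, not drawn from any of these works, the following Python snippet simulates waiting times in a FIFO single-server queue via the classic Lindley recursion and checks the estimate against the known M/M/1 mean wait in queue, rho/(mu - lambda); the parameter values are arbitrary choices for the example:

```python
import random

def lindley_waits(arrival_rate, service_rate, num_customers, seed=1):
    """Waiting times in a FIFO single-server queue via the Lindley recursion
    W[k+1] = max(0, W[k] + S[k] - A[k+1]), with exponential interarrival
    and service times (i.e., an M/M/1 queue)."""
    rng = random.Random(seed)
    waits = [0.0]  # the first customer of a fresh system never waits
    for _ in range(num_customers - 1):
        service = rng.expovariate(service_rate)       # S[k]
        interarrival = rng.expovariate(arrival_rate)  # A[k+1]
        waits.append(max(0.0, waits[-1] + service - interarrival))
    return waits

# For M/M/1 with lambda = 0.5 and mu = 1.0, the long-run mean wait in queue
# is rho / (mu - lambda) = 1.0; the simulated mean should land nearby.
waits = lindley_waits(arrival_rate=0.5, service_rate=1.0, num_customers=200_000)
mean_wait = sum(waits) / len(waits)
```

The same recursion, run with non-exponential sampling, is the natural starting point for simulation studies such as the overlap-time analysis of Palomo and Pender earlier in this program.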
Towards Semi-automatic Model Specification David Shuttleworth (Old Dominion University) Abstract This paper presents a natural language understanding (NLU) approach to transition a description of a phenomenon towards a simulation specification. As multidisciplinary endeavors using simulations increase, so does the need for teams to communicate better and make non-modelers active participants in the process. We focus on semi-automating the model conceptualization process towards the creation of a specification, as it is one of the most challenging steps in collaborations. The approach relies on NLU processing of narratives, creates a model that captures concepts and relationships, and finally provides a simulation implementation specification. An initial definition set and grammatical rules are proposed to formalize this process. A Design of Experiments was used to test the NLU model's accuracy on a test case that generates Agent-Based Model (ABM) conceptualizations and specifications. We provide a discussion of the advantages and limitations of using NLU for model conceptualization and specification processes. Modeling Multi-Level Patterns of Environmental Migration in Bangladesh: An Agent-Based Approach Kelsea B. Best (Vanderbilt University) Abstract Environmental change interacts with population migration in complex ways that depend on interactions between impacts on individual households and on communities. These coupled individual-collective dynamics make agent-based simulations useful for studying environmental migration. We present an original agent-based model that simulates environment-migration dynamics in terms of the impacts of natural hazards on labor markets in rural communities, with households deciding whether to migrate based on maximizing their expected income. We use a pattern-oriented approach that seeks to reproduce observed patterns of environmentally-driven migration in Bangladesh.
The model is parameterized with empirical data, and unknown parameters are calibrated to reproduce the observed patterns. The model can reproduce these patterns, but only for a narrow range of parameters. Future work will compare income-maximizing decisions to psychologically complex decision heuristics that include non-economic considerations. Simulation Model Simplification Extended Abstract Igor Stogniy (Technische Universität Dresden) Abstract There is a need to use simplified simulation models to optimize production planning. However, the simplification is usually based on the gut feeling of experts who do not have time to analyze the various concepts in detail. In this research, a detailed analysis of simulation model simplification by substituting constant delays for operations was performed under conditions close to the real world. A statistical model was developed to calculate the delay values. The simplification results based on the statistical model are compared with the results based on the detailed model. The experiments were carried out based on the MIMAC dataset 5 model. Higher-Order Coverage Error Analysis for Batching and Sectioning Shengyi He (Columbia University) Abstract While batching and sectioning have been widely used in simulation, their higher-order coverage behaviors, and whether one is better than the other in this regard, remain open questions. We develop techniques to obtain higher-order coverage errors for sectioning and batching via an Edgeworth-type expansion on t-statistics. Based on our expansion, we give insights into the effect of the number of batches on the coverage error. Moreover, we theoretically argue that neither batching nor sectioning is uniformly better than the other in terms of coverage, but sectioning usually has a smaller coverage error when the number of batches is large. We also support our theoretical findings via numerical experiments.
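The batch-means construction analyzed in the last abstract is easy to check empirically. The sketch below is illustrative only (it is not the author's code and does not touch the paper's Edgeworth-expansion analysis): it estimates the coverage of a nominal 95% batching confidence interval for the mean of i.i.d. exponential data, using 10 batches and the corresponding t critical value:

```python
import math
import random
import statistics

def batch_means_ci(samples, num_batches, t_crit):
    """Batching: split one output sequence into equal batches and form a
    confidence interval for the mean from the spread of the batch means,
    using a t critical value with num_batches - 1 degrees of freedom."""
    m = len(samples) // num_batches
    batch_means = [statistics.fmean(samples[i * m:(i + 1) * m])
                   for i in range(num_batches)]
    center = statistics.fmean(batch_means)
    half_width = t_crit * statistics.stdev(batch_means) / math.sqrt(num_batches)
    return center - half_width, center + half_width

# Empirical coverage of the true mean (1.0) of Exp(1) data with 10 batches;
# 2.262 is the 97.5% t quantile with 9 degrees of freedom.
rng = random.Random(7)
hits, reps = 0, 1000
for _ in range(reps):
    data = [rng.expovariate(1.0) for _ in range(1000)]
    lo, hi = batch_means_ci(data, num_batches=10, t_crit=2.262)
    hits += lo <= 1.0 <= hi
coverage = hits / reps  # should sit near the nominal 0.95
```

For the sample mean, sectioning and batching coincide; they differ for nonlinear estimators, where sectioning centers the interval at the full-sample estimator rather than at the average of the batch estimates.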
Doctoral Colloquium · PhD Colloquium PhD Colloquium II Chair: Chang-Han Rhee (Northwestern University) Farming for Mining: Combining Data Farming and Data Mining to Gain Knowledge in Supply Chains Joachim Hunker (Technische Universität Dortmund) Abstract Knowledge discovery in databases (KDD) is a frequently used method in the context of supply chains (SC). The core phase of KDD is known under the term data mining. To gain knowledge, e.g., to support decisions in supply chain management (SCM), input data for KDD are necessary. Typically, such data consist of observational data, which have to be preprocessed before the data mining phase. Besides relying on observational data, simulation can be used to generate data as an input for the process of knowledge discovery. The process of using a simulation model as a data generator is called data farming. To link data farming and KDD, a Farming-for-Mining framework has been developed, in which the data farming process generates data as an input for the KDD process to support decisions in SCM. A Simulation Analysis of Analytics-driven Community-based Reintegration Programs Iman Attari (Indiana University Bloomington) Abstract We develop a data-driven simulation model in partnership with Tippecanoe County Community Corrections to evaluate assignment policies for reintegration programs. These programs are intended to help clients with their transition back to society after release, with the goal of ending the "revolving door of recidivism". Leveraging client-level and system-level data, we develop a queueing-based network model to capture the movement of clients in the system. We integrate a personalized recidivism prediction to capture heterogeneous risks, along with estimated effects of reintegration programs from the literature.
Using simulation, we find that the largest benefit is achieved by implementing any kind of reintegration program, regardless of assignment policy, as the savings in societal and re-incarceration costs (from recidivism) outweigh program costs. An assignment policy based on predictive analytics achieves a 1.5-times larger reduction in recidivism compared to current practice. In expanding capacity, greater consideration should be given to investing in analytics-driven program assignments. Optimal Scheduling of a Multi-Clinic Healthcare Facility in the Course of a Pandemic Hossein Piri (University of British Columbia) Abstract Due to the social distancing requirements during COVID-19, elevator capacity in high-rise buildings has been reduced by 50-70%. The reduced elevator capacity results in queue build-up and increases elevator wait times, which makes social distancing challenging in lobbies and elevator halls. This could increase the chance of spreading the disease and would pose significant safety risks. Therefore, it is necessary to design an intervention that could help safely manage the elevator queues and reduce elevator wait times. In this work, we focus on minimizing the elevator wait time in a multi-clinic facility by controlling the people arriving at the elevator halls, which is possible by optimizing the clinic schedule. Investigating Cloud-based Architecture for Distributed Simulation (DS) in Operational Research (OR) Nura Tijjani Abubakar (Brunel University London; Jigawa State Institute of IT, Kazaure) Abstract For decades, Modeling & Simulation (M&S) has been the choice for Operations Research and Management Science (OR/MS) to analyze system behaviors. The evolution of M&S brought Distributed Simulation (DS) and the High-Level Architecture (HLA), used mainly by defense applications, allowing researchers to compose models which run on different processors.
As cloud computing grows, its capabilities have upgraded many applications, including M&S, providing the elasticity needed for DS to speed up simulation by bringing reusable models together with interoperability. This paper presents a proposed cloud-based DS deployment architecture and development framework for OR analysts. Characterizing the Distributions of Commits in Large Source Code Repositories Aradhana Soni (The University of Tennessee) Abstract Modern software development is based on software repositories and changes committed to those repositories. However, there is inadequate insight into the nature of changes committed to repositories of different sizes. A data-based characterization of commit activity in large software hubs contributes to a better understanding of software development and can feed into the detection of bugs at the earliest phases. Here, we present preliminary results from characterizing the distribution of 452 million commits in a metadata listing from GitHub repositories. Based on multiple distributions, we find the best fits and second-best fits across different ranges in the data. The characterization is aimed at synthetic repository generation suitable for use in simulation and machine learning. Simulation of Video Analytic Applications Using Deep Learning Dipak Pudasaini (Ryerson University) Abstract Traditional approaches to video analytics run only in the cloud, which incurs high latency and requires more network bandwidth to transfer data. To mitigate these problems, a Video Analytic Data Reduction Model (VADRM) using deep learning, implemented with CNN-based edge computing modules, is proposed. VADRM divides video analytic jobs into smaller tasks for small processing edge nodes. Examining this technique and finding a solution in a real system is very expensive.
Therefore, the results of the prototype model are used in simulation to test the problem and alleviate the bottlenecks. The proposed solution is to develop an architecture that integrates IoT with edge and cloud to minimize network bandwidth and latency. In this work, simulation is performed in iFogSim, and the simulated results show that the integrated edge-and-cloud model using VADRM yields 85% better performance than the cloud-only approach. Using Longitudinal Health Records to Simulate the Impact of National Treatment Guidelines for Cardiovascular Disease Daniel Otero-Leon (University of Michigan) Abstract Continuous tracking of patients' health data through electronic health records (EHRs) has created an opportunity to predict the long-term impacts of healthcare policies. Despite the advances in EHRs, data may be missing or sparsely collected. This article develops a simulation model to test multiple treatment guidelines for cardiovascular disease (CVD) prevention. Our methodology uses the EM algorithm to fit sparse health data and a discrete-time Monte Carlo simulation model to test guidelines for different patient demographics. Our results suggest that, among published guidelines, those focusing on reducing CVD risk can reduce treatment without increasing the risk of severe health outcomes. Why Does Facebook Fail to Catalyze Diverse Friendship Formations? Firman M. Firmansyah (Stony Brook University) Abstract This study sought to understand why Facebook, the largest social networking site intended to “bring the world closer together”, fails to catalyze diverse friendship formations. In doing so, it employs agent-based modeling built on the Framework for Intergroup Relations and Multiple Affiliations Networks (FIRMAN).
As demonstrated in 600 simulations, Facebook has primarily enhanced users' tie capacity (TC) to maintain a larger number of friendships while doing little to empower users' tie outreachability (TO) to tolerate group differences. These conditions inevitably hinder diverse friendship formations on Facebook. Methodological Improvements of Online Pandemic Simulation for Short-term Healthcare Resource Prediction Daniel Garcia-Vicuña (Public University of Navarre) Abstract Short-term hospital resource prediction is critical during pandemic healthcare crises. Simulation models can mimic the dynamics of a hospital during pandemic waves and can be used as short-term hospital resource prediction tools. However, such simulation models have to focus on the transition period of the health system rather than the steady state, as is usual in simulation studies. In this presentation, we discuss the methodological challenges faced in developing data-driven simulation models that account for the variability and uncertainty of the pandemic evolution. In particular, we focus on the proposal of new estimators of the probability of admission to the Intensive Care Unit (ICU) and of the length of stay of patients in the regular ward and in the ICU. The simulation models were used daily during the COVID-19 pandemic waves by the Spanish health administrations. A Parallel Algorithm to Execute DEVS Simulations in Shared Memory Architectures Guillermo G. Trabes (Carleton University) Abstract As the Discrete Event Systems Specification (DEVS) formalism becomes more popular and is used in more fields of application, simulations become more complex and time consuming. For this reason, we need to develop new ideas to execute them efficiently. In this work we propose a parallel algorithm to execute DEVS simulations in shared memory architectures. Our approach guarantees a simple and error-free parallel execution without adding much overhead.
In addition, we perform an experimental evaluation showing that our approach accelerates execution several times over. Doctoral Colloquium · PhD Colloquium PhD Colloquium Poster Session Chair: Chang-Han Rhee (Northwestern University) Hyperparameter Optimization of Deep Neural Network with Applications to Medical Device Manufacturing Gautham Sunder (University of Minnesota, Carlson School of Management) Abstract Bayesian Optimization (BO), a class of Response Surface Optimization (RSO) methods for nonlinear functions, is a commonly adopted strategy for hyperparameter optimization (HPO) of Deep Neural Networks (DNNs). Through a case study at a medical device manufacturer, we empirically illustrate that, in some cases, HPO problems can be well approximated by a second-order polynomial model, and in such cases, classical response surface optimization (C-RSO) methods are demonstrably more efficient than BO. In this study, we propose Compound-RSO, a highly efficient three-stage batch sequential strategy for RSO when there is uncertainty in the complexity of the response surface. Through a simulation study and a case study at a medical device manufacturer, we illustrate that Compound-RSO is more efficient than BO for approximating a second-order response surface and has comparable results to BO when the response surface is complex and nonlinear. Modeling Multi-Level Patterns of Environmental Migration in Bangladesh: An Agent-Based Approach Kelsea B. Best (Vanderbilt University) Abstract Environmental change interacts with population migration in complex ways that depend on interactions between impacts on individual households and on communities. These coupled individual-collective dynamics make agent-based simulations useful for studying environmental migration.
We present an original agent-based model that simulates environment-migration dynamics in terms of the impacts of natural hazards on labor markets in rural communities, with households deciding whether to migrate based on maximizing their expected income. We use a pattern-oriented approach that seeks to reproduce observed patterns of environmentally driven migration in Bangladesh. The model is parameterized with empirical data, and unknown parameters are calibrated to reproduce the observed patterns. The model can reproduce these patterns, but only for a narrow range of parameters. Future work will compare income-maximizing decisions to psychologically complex decision heuristics that include non-economic considerations. Towards Semi-automatic Model Specification David Shuttleworth (Old Dominion University) Abstract This paper presents a natural language understanding (NLU) approach to transition a description of a phenomenon towards a simulation specification. As multidisciplinary endeavors using simulations increase, so does the need for teams to communicate better and make non-modelers active participants in the process. We focus on semi-automating the model conceptualization process towards the creation of a specification, as it is one of the most challenging steps in collaborations. The approach relies on NLU processing of narratives, creates a model that captures concepts and relationships, and finally provides a simulation implementation specification. An initial definition set and grammatical rules are proposed to formalize this process. A Design of Experiments was used to test the NLU model accuracy for a test case that generates Agent-Based Model (ABM) conceptualizations and specifications. We provide a discussion of the advantages and limitations of using NLUs for model conceptualization and specification processes.
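The narrative-to-specification idea in the Shuttleworth abstract above can be illustrated with a toy sketch. The regex "grammar", the relation names, and the output format below are hypothetical assumptions for illustration only; the paper's actual definition set and grammatical rules are not shown here, and a real NLU pipeline would use a trained parser rather than a regex.

```python
import re

# Hypothetical grammar sketch: extract simple (subject, relation, object)
# concept triples from a narrative as a first step toward an ABM specification.
# Only "The X <relation> the Y" style clauses are handled.
PATTERN = re.compile(
    r"(?:the\s+)?(\w+)\s+(moves to|buys|infects)\s+(?:the\s+)?(\w+)", re.I
)

def extract_triples(narrative):
    """Return (agent, relation, target) triples found in the narrative."""
    return [m.groups() for m in PATTERN.finditer(narrative)]

def to_spec(triples):
    """Render triples as minimal, made-up agent/relation declarations."""
    agents = sorted({t[0].lower() for t in triples} | {t[2].lower() for t in triples})
    lines = [f"agent {a}" for a in agents]
    lines += [f"relation {s.lower()} -[{v.lower()}]-> {o.lower()}" for s, v, o in triples]
    return "\n".join(lines)

triples = extract_triples("The customer buys the product. The virus infects the host.")
spec = to_spec(triples)
```

Each triple becomes an agent pair plus a relation line, which a modeler could then review and refine — the semi-automation the abstract describes.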
Contextual Ranking and Selection with Gaussian Processes Sait Cakmak (Georgia Institute of Technology) Abstract In many real-world problems, we face the task of selecting the best among a finite number of alternatives, where the best alternative is determined based on context-specific information. In this work, we study the contextual Ranking and Selection problem under a finite-arm, finite-context setting, where we aim to find the best alternative for each context. We use a separate Gaussian process to model the reward for each arm, derive the large deviations rate function for both the expected and worst-case contextual probability of correct selection, and propose an iterative algorithm for maximizing the rate function. Numerical experiments show that our algorithm is highly competitive in terms of sampling efficiency, while having significantly smaller computational overhead. Simulation Optimization for a Digital Twin Using a Multi-fidelity Framework Yiyun Cao (University of Southampton) Abstract Digital twin technology is increasingly ubiquitous in manufacturing, and there is a need to increase the efficiency of optimization methods that use digital twins to answer questions about the real system. These methods typically support short-term operational decisions and, as a result, optimization methods need to return results in real or near-real time. This is especially challenging in manufacturing systems, as the simulation models are typically large and complex. In this extended abstract, we briefly describe an algorithm for a multi-fidelity model that uses a simpler low-fidelity neural network metamodel in the first stage of the optimization and a high-fidelity simulation model in the second stage. It is designed to find a good solution using a relatively small number of replications of high-fidelity models for problems with more alternatives than conventional ranking and selection procedures can handle.
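The two-stage idea in the Cao abstract above — screen many alternatives with a cheap low-fidelity model, then spend expensive high-fidelity replications only on a shortlist — can be sketched as follows. The toy objective, the biased analytic surrogate (standing in for the paper's neural network metamodel), and every parameter value are illustrative assumptions, not the authors' algorithm.

```python
import random

def multifidelity_select(alternatives, low_fid, high_fid,
                         shortlist_size=5, reps=20, seed=0):
    """Two-stage multi-fidelity selection (sketch, minimization):
    Stage 1: rank all alternatives with a cheap low-fidelity model.
    Stage 2: estimate only the shortlist with noisy high-fidelity replications."""
    rng = random.Random(seed)
    shortlist = sorted(alternatives, key=low_fid)[:shortlist_size]
    means = {}
    for x in shortlist:
        means[x] = sum(high_fid(x, rng) for _ in range(reps)) / reps
    return min(means, key=means.get)

# Toy problem: true objective (x - 3)^2; the low-fidelity model is biased
# but preserves enough ranking information to shortlist good candidates.
best = multifidelity_select(
    alternatives=range(10),
    low_fid=lambda x: (x - 3) ** 2 + 0.5 * x,           # cheap, biased
    high_fid=lambda x, rng: (x - 3) ** 2 + rng.gauss(0, 0.5),  # expensive, noisy
)
```

The design point is that high-fidelity replications scale with the shortlist size, not with the full number of alternatives.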
Why does Facebook fail to catalyze diverse friendship formations? Firman M. Firmansyah (Stony Brook University) Abstract This study sought to understand why Facebook, the largest social networking site intended to “bring the world closer together”, fails to catalyze diverse friendship formations. In doing so, it employs agent-based modeling built on the Framework for Intergroup Relations and Multiple Affiliations Networks (FIRMAN). As demonstrated in 600 simulations, Facebook has primarily enhanced users' tie capacity (TC) to maintain a larger number of friendships while doing little to empower users' tie outreachability (TO) to tolerate group differences. These conditions inevitably hinder diverse friendship formations on Facebook. Real-time Generation and Exploitation of Discrete Event Simulation Models as Decision-support Tools for Manufacturing Giovanni Lugaresi (Politecnico di Milano) Abstract Complex manufacturing systems require digital decision-support tools for optimal production planning and control. Discrete event simulation models can guarantee the ability to take prompt decisions at any time, provided an up-to-date model is available. Hence, techniques for prompt model generation or adaptation to the physical system have to be developed. The literature is rich in approaches to generate digital models from available datasets. Yet, such techniques are mostly suited for managerial processes and cannot properly support manufacturing applications. This research addresses automated model generation for production systems. The research is beneficial for the development of real-time decision-support systems in manufacturing environments. Characterizing the Distributions of Commits in Large Source Code Repositories Aradhana Soni (The University of Tennessee) Abstract Modern software development is based on software repositories and changes committed to those repositories.
However, there is inadequate insight into the nature of changes committed to repositories of different sizes. A data-based characterization of commit activity in large software hubs contributes to a better understanding of software development and can feed into the detection of bugs at the earliest phases. Here, we present preliminary results from characterizing the distribution of 452 million commits in a metadata listing from GitHub repositories. Based on multiple distributions, we find the best and second-best fits across different ranges in the data. The characterization is aimed at synthetic repository generation suitable for use in simulation and machine learning. Simulation of Video Analytic Applications Using Deep Learning Dipak Pudasaini (Ryerson University) Abstract Traditional approaches to video analytics run only in the cloud, which incurs high latency and requires more network bandwidth to transfer data. To mitigate these problems, a Video Analytic Data Reduction Model (VADRM) using deep learning with CNN-based edge computing modules is proposed. Through CNN-based video processing, VADRM divides video analytic jobs into smaller tasks for small edge processing nodes. Examining this technique and finding a solution in a real system is very expensive. Therefore, the results of the prototype model are used in simulation to test the problem and alleviate the bottlenecks. The proposed solution is to develop an architecture that integrates IoT with edge and cloud to minimize network bandwidth and latency. In this work, the simulation is performed in iFogSim, and the results show that the integrated edge-cloud model using VADRM yields 85% higher performance than the cloud-only approach.
Optimal Scheduling of a Multi-Clinic Healthcare Facility in the Course of a Pandemic Hossein Piri (University of British Columbia) Abstract Due to the social distancing requirements during the COVID-19 pandemic, the elevator capacity in high-rise buildings has been reduced by 50-70%. The reduced elevator capacity results in queue build-up and increases elevator wait times, which makes social distancing challenging in the lobby and elevator halls. This could increase the chance of spreading the disease and would pose significant safety risks. Therefore, it is necessary to design an intervention that could help safely manage the elevator queues and reduce elevator wait times. In this work, we focus on minimizing the elevator wait time in a multi-clinic facility by controlling the people arriving at the elevator halls, which is possible by optimizing the clinic schedule. Investigating Cloud-based Architecture for Distributed Simulation (DS) in Operations Research (OR) Nura Tijjani Abubakar (Brunel University London; Jigawa State Institute of IT, Kazaure) Abstract For decades, Modeling & Simulation (M&S) has been the choice of Operations Research and Management Science (OR/MS) for analyzing system behaviors. The evolution of M&S brought Distributed Simulation (DS) and the High-Level Architecture (HLA), used mainly in defense applications, allowing researchers to compose models that run on different processors. As cloud computing grows, its capabilities have upgraded many applications, including M&S: the cloud offers the elasticity DS needs to speed up simulation by bringing reusable, interoperable models together. This paper presents a proposed cloud-based DS deployment architecture and development framework for OR analysts. Simulation Model Simplification Extended Abstract Igor Stogniy (Technische Universität Dresden) Abstract There is a need to use simplified simulation models to optimize production planning.
However, the simplification is usually based on the gut feeling of experts who do not have time to analyze the various concepts in detail. In this research, a detailed analysis of simulation model simplification by substituting operations with constant delays was performed under conditions close to the real world. A statistical model was developed to calculate the delay values. The simplification results based on the statistical model are compared with the results based on the detailed model. The experiments were carried out on the MIMAC dataset 5 model. A Simulation Analysis of Analytics-driven Community-based Reintegration Programs Iman Attari (Indiana University Bloomington) Abstract We develop a data-driven simulation model in partnership with Tippecanoe County Community Corrections to evaluate assignment policies for reintegration programs. These programs are intended to help clients with their transition back to society after release, with the goal of ending the "revolving door of recidivism". Leveraging client-level and system-level data, we develop a queueing-based network model to capture the movement of clients in the system. We integrate a personalized recidivism prediction to capture heterogeneous risks, along with estimated effects of reintegration programs from the literature. Using simulation, we find that the largest benefit is achieved by implementing any kind of reintegration program, regardless of assignment policy, as the savings in societal and re-incarceration costs (from recidivism) outweigh program costs. An assignment policy based on predictive analytics achieves a 1.5-times larger reduction in recidivism compared to current practice. In expanding capacity, greater consideration should be given to investing in analytics-driven program assignments.
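The policy comparison described in the Attari abstract above can be sketched with a simple Monte Carlo experiment. Everything numeric below — the risk distribution, the hypothetical 30% program effect, the targeting threshold — is an invented placeholder for illustration, not an estimate from the paper or its data.

```python
import random

def simulate_policy(assign, n_clients=2000, seed=1):
    """Monte Carlo sketch: each client gets a recidivism risk score in [0, 1];
    an assigned program cuts that risk by a hypothetical 30%. Returns the
    recidivism rate under the given assignment policy. Using a common seed
    makes the comparison across policies paired (common random numbers)."""
    rng = random.Random(seed)
    recidivate = 0
    for _ in range(n_clients):
        risk = rng.random()
        if assign(risk):
            risk *= 0.7  # hypothetical program effect, not from the paper
        if rng.random() < risk:
            recidivate += 1
    return recidivate / n_clients

rate_none = simulate_policy(lambda risk: False)           # no program
rate_all = simulate_policy(lambda risk: True)             # everyone enrolled
rate_targeted = simulate_policy(lambda risk: risk > 0.5)  # target high risk
```

Even this toy version reproduces the qualitative finding that any program assignment beats none, while a risk-targeted policy concentrates capacity where the reduction is largest.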
Higher-Order Coverage Error Analysis for Batching and Sectioning Shengyi He (Columbia University) Abstract While batching and sectioning have been widely used in simulation, their higher-order coverage behaviors, and whether one is better than the other in this regard, remain open questions. We develop techniques to obtain higher-order coverage errors for sectioning and batching via an Edgeworth-type expansion on t-statistics. Based on our expansion, we give insights into the effect of the number of batches on the coverage error. Moreover, we theoretically argue that neither batching nor sectioning is uniformly better than the other in terms of coverage, but sectioning usually has a smaller coverage error when the number of batches is large. We also support our theoretical findings via numerical experiments. Applying Discrete-event Simulation and Value Stream Mapping to Reduce Waste in an Automotive Engine Manufacturing Plant Ana Carolina M. Moreira (Auburn University) Abstract This paper applies a combination of Value Stream Mapping (VSM) and Discrete-Event Simulation (DES) in an automotive engine manufacturing plant. First, a current-state VSM was created and the sources of waste were identified. The Leak Test area and engine impregnation process were identified as major sources of waste. Based on that, two potential improvement scenarios were developed and analyzed using DES. The simulation was used to compare key measures of performance in the current state and the proposed scenarios, using different settings for adjustable system parameters. Results showed improvements of up to 29% in annual engine impregnation cost for one scenario, without detriment to other measures. The study's major takeaway is demonstrating that VSM in conjunction with DES is a powerful alternative for studying changes in production processes, which leverages the advantages of both methodologies.
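A DES scenario comparison of the kind described in the Moreira abstract above can be sketched with a minimal single-station model. The arrival and processing parameters below are invented for illustration and do not come from the plant study; a real model would cover many stations and performance measures.

```python
import random

def simulate_station(n_jobs, mean_interarrival, mean_process, seed=42):
    """Minimal single-server DES (sketch): exponential arrivals and service,
    returns average job flow time. A stand-in for a full plant model when
    comparing a current state against an improvement scenario. A fixed seed
    gives common random numbers, so the scenario comparison is paired."""
    rng = random.Random(seed)
    t_arrive = server_free = total_flow = 0.0
    for _ in range(n_jobs):
        t_arrive += rng.expovariate(1.0 / mean_interarrival)
        start = max(t_arrive, server_free)        # wait if server busy
        server_free = start + rng.expovariate(1.0 / mean_process)
        total_flow += server_free - t_arrive      # waiting + processing
    return total_flow / n_jobs

# Current state vs a hypothetical improvement that shortens processing time.
current = simulate_station(5000, mean_interarrival=10.0, mean_process=8.0)
improved = simulate_station(5000, mean_interarrival=10.0, mean_process=6.0)
```

With common random numbers, the improved scenario's flow times are dominated job by job, which is what makes such paired scenario comparisons statistically efficient.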
Using Longitudinal Health Records to Simulate the Impact of National Treatment Guidelines for Cardiovascular Disease Daniel Otero-Leon (University of Michigan) Abstract Continuous tracking of patients’ health data through electronic health records (EHRs) has created an opportunity to predict healthcare policies’ long-term impacts. Despite the advances in EHRs, data may be missing or sparsely collected. This article develops a simulation model to test multiple treatment guidelines for cardiovascular disease (CVD) prevention. Our methodology uses the EM algorithm to fit sparse health data and a discrete-time Monte Carlo simulation model to test guidelines for different patient demographics. Our results suggest that, among published guidelines, those focusing on reducing CVD risk can reduce treatment without increasing the risk of severe health outcomes. A Parallel Algorithm to Execute DEVS Simulations in Shared Memory Architectures Guillermo G. Trabes (Carleton University) Abstract As the Discrete Event Systems Specification (DEVS) formalism becomes more popular and is used in more fields of application, simulations become more complex and time consuming. For this reason, we need to develop new ideas to execute them efficiently. In this work we propose a parallel algorithm to execute DEVS simulations in shared memory architectures. Our approach guarantees a simple and error-free parallel execution without adding much overhead. In addition, we perform an experimental evaluation showing that our approach accelerates execution several times over.
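The Trabes abstract above rests on a generic pattern that shared-memory parallel DEVS executions exploit: all components imminent at the global minimum next-event time can fire their internal transitions concurrently without conflicts. The sketch below illustrates that pattern only; it is not the authors' algorithm, and the toy state/time-advance functions are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_devs_step(components, executor):
    """One step of a conservative parallel loop (sketch): every component
    imminent at the global minimum next-event time fires its internal
    transition concurrently; the rest are untouched, so there are no
    write conflicts between threads."""
    t_next = min(c["next_time"] for c in components)
    imminent = [c for c in components if c["next_time"] == t_next]
    def fire(c):
        c["state"] += 1                    # toy internal transition
        c["next_time"] = t_next + c["ta"]  # time-advance function
    list(executor.map(fire, imminent))
    return t_next

# Toy run: three components with different time advances.
components = [
    {"state": 0, "next_time": 1.0, "ta": 1.0},
    {"state": 0, "next_time": 1.0, "ta": 2.0},
    {"state": 0, "next_time": 3.0, "ta": 3.0},
]
with ThreadPoolExecutor(max_workers=4) as ex:
    now = 0.0
    while now < 5.0:
        now = parallel_devs_step(components, ex)
```

Each thread mutates only its own component and reads only the shared, already-fixed `t_next`, which is what makes the step trivially race-free.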
Estimating the Effectiveness of Non-pharmaceutical Interventions in Heterogeneous Populations During an Emerging Infectious Disease Epidemic Johannes Ponge (University of Münster) Abstract Non-pharmaceutical interventions (NPIs) such as quarantining or school closures are immediate measures to contain the diffusion of an emerging infectious disease in a susceptible population in the absence of therapeutics or vaccinations. However, the effectiveness of containment measures strongly depends on regionally heterogeneous demographic structures (consider school closures, which contribute little to disease containment in areas with little to no school-age population). In my work, I present a pathogen-generic agent-based simulation approach to produce regional estimates of the effectiveness of NPIs. The thesis consists of three blocks. First, an approach to generate realistic synthetic populations based on publicly available census data. Second, a modular agent-based model architecture to enable simulations of various pathogens, populations, and NPIs. Third, a case study demonstrating the evaluation of regional NPI effectiveness in the context of the German COVID-19 epidemic. I suggest that my work will support the development of more sophisticated intervention strategies during emerging epidemics. Farming for Mining: Combining Data Farming and Data Mining to Gain Knowledge in Supply Chains Joachim Hunker (Technische Universität Dortmund) Abstract Knowledge discovery in databases (KDD) is a frequently used method in the context of supply chains (SC). The core phase of KDD is known as data mining. To gain knowledge, e.g., to support decisions in supply chain management (SCM), input data for KDD are necessary. Typically, such data consist of observational data, which have to be preprocessed before the data mining phase.
Besides relying on observational data, simulation can be used to generate data as input for the process of knowledge discovery. The process of using a simulation model as a data generator is called data farming. To link data farming and KDD, a Farming-for-Mining Framework has been developed, in which the data farming process generates data as input for the KDD process to support decisions in SCM. Methodological Improvements of Online Pandemic Simulation for Short-term Healthcare Resource Prediction Daniel Garcia-Vicuña (Public University of Navarre) Abstract Short-term hospital resource prediction is critical during pandemic healthcare crises. Simulation models can mimic the dynamics of a hospital during pandemic waves and can be used as short-term hospital resource prediction tools. However, such simulation models have to focus on the transition period of the health system rather than the steady state, as is usual in simulation studies. In this presentation, we discuss the methodological challenges faced in developing data-driven simulation models that account for the variability and uncertainty of the pandemic evolution. In particular, we focus on the proposal of new estimators of the probability of admission to the Intensive Care Unit (ICU) and of the length of stay of patients in the regular ward and in the ICU. The simulation models were used daily during the COVID-19 pandemic waves by the Spanish health administrations. Panel · Plenary [Virtual] What follows after tenure? Chair: Cristina Ruiz-Martín (Carleton University) What Follows After Tenure? Cristina Ruiz Martin (Carleton University) Abstract When one starts their career as a professor, the first thing they need to achieve is tenure. After 3-5 years, once tenure is achieved, we still have many years in our career before retirement. Sometimes, it is not easy to decide how to move forward.
For example, in many universities, professors have their first sabbatical in year 6. It is not always easy to decide how to approach this milestone. What should be included in the sabbatical plan? What are the common activities done during sabbaticals? How do we build connections to get invited to an institution for our sabbatical? At the same time, we also face other challenges that are critical for promotion: How do we build international collaborations? How do we participate in multidisciplinary international projects? In this panel, professors at different career stages and from different countries will share their own experiences and how they addressed these challenges. Technical Session · Complex, Intelligent, Adaptive and Autonomous Systems [Virtual] Complexity Management Chair: Saurabh Mittal (MITRE Corporation) Simulation-Supported Engineering of Self-Adaptive Software Systems Tom Meyer, Andreas Ruscheinski, Pia Wilsdorf, and Adelinde M. Uhrmacher (University of Rostock) Abstract Engineering a self-adaptive software system is challenging. During design time as well as run time, assurance cases are central to ensuring reliable operation of the software. Simulation, in addition to software verification and testing, is a viable means to provide evidence for assurance cases. So far, little attention has been given to the development of the underlying simulation models. Here, we argue that a systematic approach to developing simulation models will enhance the overall engineering process and will contribute to seamless integration of simulation and engineering processes. In our approach, we relate an explicit representation of the conceptual model and simulation experiments to artifacts of the engineering process. We show the first steps of applying our approach in a concrete ongoing software project for medical diagnosis, and discuss the role of components of the conceptual model in designing the software as a self-adaptive software system.
Managing the Complexity of Collaborative Simulation-Based Human-in-the-Loop Experimentation David Prochnow and Robert Portigue (MITRE Corporation) Abstract Human-in-the-Loop (HITL) experimentation can be effective for assessing the efficacy of multi-person operations. Based on the experiment objectives of such operations, an experiment team creates a virtual experimentation environment in which multiple human subjects can collaborate to achieve mission objectives. Afterwards, analysts assess the effectiveness of the new technologies or procedures under test. While there is much value in conducting simulation-based HITL experimentation, there is also a large degree of complexity. This paper presents a framework for managing the complexity of executing such an experiment by dividing the experiment team into several smaller specialized teams that collaborate using the processes described in this paper. An innovation leadership team, scenario team, technical team, and Data Collection and Analysis (DCA) team work together to plan and execute the experiment and assess results. An experiment team can use the methodology presented here to manage complexity and, ultimately, accomplish the objectives of collaborative simulation-based HITL experiments. Cyber (Re-)Insurance Policy Writing is NP-Hard in IoT Societies Ranjan Pal, Taoan Lu, and Peihan Liu (University of Michigan) and Xinlong Yin (Georgia Institute of Technology) Abstract The last decade has witnessed steadily growing markets for cyber (re-)insurance products to mitigate residual cyber-risk. In this introductory effort, we prove that underwriting simple cyber re-insurance policies can be worst-case computationally hard, i.e., NP-hard, especially for upcoming IoT societies. More specifically, let alone human underwriters, even a computer cannot compute an optimal cyber re-insurance policy in a reasonable amount of time in worst-case scenarios.
Here, the optimality of a contract is judged by the extent of information-asymmetry-induced negative externalities it mitigates between a re-insurance seller and a buyer. Our result does not challenge the existence of cyber re-insurance markets, which we feel will be a necessity in the IoT age, but only rationalizes why their growth might be slow and would subsequently need regulatory intervention. As a direct application of our methodology, we argue that optimal traditional cyber-insurance underwriting in IoT societies is also NP-hard. Room Match: Achieving Thermal Comfort Through Smart Space Allocation and Environmental Control in Buildings Min Deng, Bo Fu, and Carol C. Menassa (University of Michigan) Abstract The thermal comfort of individuals is considered an important factor that affects the health, well-being, and productivity of occupants. However, only a small proportion of people are satisfied with the thermal environment of their current workplace. Therefore, this paper proposes a novel framework to simulate and optimize thermal comfort by controlling room conditions and matching them with occupants. The method is developed based on personalized thermal comfort prediction models and the Large Neighborhood Search (LNS) algorithm. To illustrate and validate the algorithm, a case study is provided. The results compare the thermal comfort of the occupants before and after the optimization and show a significant improvement in thermal comfort. The proposed simulation method is proven feasible and efficient in providing an optimal match of occupants and rooms with specific settings and can therefore be of great value for building management decision-making.
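The occupant-room matching step in the Room Match abstract above can be illustrated with a much simpler pairwise-swap local search — a reduced stand-in for the paper's Large Neighborhood Search, which instead destroys and repairs larger parts of the assignment. The comfort matrix below is a hypothetical placeholder for the output of a personalized comfort prediction model.

```python
import itertools

def match_rooms(comfort, iters=100):
    """Local-search sketch for occupant-room matching: start from an arbitrary
    assignment, then repeatedly apply the occupant swap that most improves
    total predicted comfort. comfort[i][j] is occupant i's predicted comfort
    in room j."""
    n = len(comfort)
    assign = list(range(n))  # occupant i -> room assign[i]
    for _ in range(iters):
        best_gain, best_pair = 0.0, None
        for i, k in itertools.combinations(range(n), 2):
            gain = (comfort[i][assign[k]] + comfort[k][assign[i]]
                    - comfort[i][assign[i]] - comfort[k][assign[k]])
            if gain > best_gain:
                best_gain, best_pair = gain, (i, k)
        if best_pair is None:
            break  # local optimum: no swap improves total comfort
        i, k = best_pair
        assign[i], assign[k] = assign[k], assign[i]
    return assign

# Toy comfort predictions for 3 occupants in 3 rooms (illustrative values).
comfort = [[0.1, 0.2, 0.9],
           [0.2, 0.8, 0.3],
           [0.7, 0.1, 0.2]]
assignment = match_rooms(comfort)
```

On this toy instance the search moves occupant 0 to room 2 and occupant 2 to room 0, which raises total predicted comfort from 1.1 to 2.4.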
Invited Paper, Contributed Paper, Technical Session · MASM: Semiconductor Manufacturing [Virtual] Factory Operations 1 Chair: Abdelgafar Hamed (Infineon Technologies AG) Backward Simulation for Production Planning - Recent Advances in a Real-world Use-case Christoph Laroque and Madlene Leissau (University of Applied Sciences Zwickau) and Wolfgang Scholl and Germar Schneider (Infineon Technologies Dresden GmbH) Abstract The focus on customer orientation as well as on-time production and delivery defines the competitive environment for manufacturing companies in the semiconductor industry. Customer-specific products must be manufactured within specified lead times and according to promised delivery dates. In this context, questions as to whether the production program is feasible and whether all previously promised delivery dates will be met are often answered with backward-oriented planning approaches, without taking into consideration any uncertainty or alternatives that arise during operations. For complex manufacturing systems (here, semiconductor manufacturing with re-entrant cycles), these questions can be answered in a more detailed and robust way by a discrete event-based simulation (DES) approach used in a backward-oriented manner. Research results show that the approach can be applied successfully to the scheduling of customer-specific orders in a real-world setting. Towards a Generic Semiconductor Manufacturing Simulation Model Abdelhak KHEMIRI, Claude YUGMA, and Stéphane DAUZERE-PERES (Ecole des Mines de Saint-Etienne) Abstract Simulation is one of the most widely used approaches to analyze semiconductor manufacturing systems. However, most simulation models are designed for single-use applications, study a limited set of problems, and are not reusable afterwards. Recently, in order to overcome this problem, the idea of developing a generic wafer fab simulation model has emerged.
Nonetheless, few papers address the development of a generic wafer fab simulation model. This paper proposes a generic, data-driven simulation model to evaluate and analyze a wide range of problems arising in modern semiconductor manufacturing systems. We discuss the issues related to the genericity of such a simulation model and the data and semantic integration issues faced by users. Simulating and Evaluating Supply Chain Disruptions Along an End-to-end Semiconductor Automotive Supply Chain Jaenichen Maximilian, Abdelgafar Ismail, Volker Dörrsam, and Christian James Martens (Infineon Technologies AG); Christina Johanna Liepold (TU Munich); and Hans Ehm (Infineon Technologies AG) Abstract The COVID-19 pandemic is an unprecedented public health and economic crisis. It dramatically impacted different industries and presented an unforeseen challenge to the automotive industry and its supply chain (SC). We build a system dynamics simulation model to demonstrate the behavior of a multi-echelon SC responding to different end-market scenarios. The model results highlight challenges that arise for a semiconductor automotive SC not only during, but also after, a disruption like the COVID-19 pandemic: strong demand dynamics that cause substantial operational consequences. The model evaluates how upstream companies in the automotive SC suffer from the disruption in terms of amplitude and duration. In order to mitigate these challenges, close collaboration among players in the SC can increase the robustness of the overall SC during unforeseen events like the pandemic. Technical Session · Simulation Optimization [Virtual] Gradient-Based Optimization Chair: Soumyadip Ghosh (IBM T. J.
Watson Research Center) Stochastic Approximation with Gaussian Process Regression Yingcui Yan, Haihui Shen, and Zhibin Jiang (Shanghai Jiao Tong University) Abstract Stochastic approximation (SA) is attractive due to its curse-of-dimensionality-free convergence rate, but its finite-sample performance is not always satisfying. Beyond improving SA on its own, combining SA with other simulation optimization methods is also a promising direction for better performance. In this paper we propose to integrate the original SA with Gaussian process (GP) regression, and call this algorithm SAwGP. The GP regression serves as a surrogate model that uses all past sampling information to guide the SA iteration, which tends to be beneficial especially in the early stage. We theoretically prove that integrating the surrogate model does not ruin the local convergence of SA, and numerically demonstrate that the finite-sample performance of SAwGP is better than that of the original SA, while the rate of convergence does not deteriorate and is even enhanced. Sensitivity Analysis and Time-cost Tradeoffs in Stochastic Activity Networks Peng Wan, Michael Fu, and Steve Marcus (The University of Maryland at College Park) Abstract Using Monte Carlo simulation, this paper proposes a new estimator for the gradient of the first moment of project completion time. Combining the new stochastic gradient estimator with a Taylor series approximation, a functional estimation procedure for activity criticality and expected project completion time is proposed and applied to optimization problems involving time-cost tradeoffs. On Solving Distributionally Robust Optimization Formulations Efficiently Soumyadip Ghosh and Mark S.
Squillante (IBM Research) and Ebisa Wollega (Colorado State University) Abstract Abstract In this paper we propose and investigate a new stochastic gradient descent (SGD) algorithm to efficiently solve distributionally robust optimization (DRO) formulations that arise across a wide range of applications. Our approach for the min-max formulations of DRO applies SGD to the outer minimization problem. Towards this end, the gradient of the inner maximization is estimated by a sample average approximation using a subset of the data in each iteration, where the subset size is progressively increased over iterations to ensure convergence. We rigorously establish convergence of our method for a broad class of models. For strongly convex models, we also determine the optimal support-size growth sequence that balances a fundamental tradeoff between stochastic error and computational effort. Empirical results demonstrate the significant benefits of our approach over previous work in solving these DRO formulations efficiently. Technical Session · Covid-19 and Epidemiological Simulations [Virtual] Effectiveness of Interventions against the Spread of Covid-19 Chair: Erik Rosenstrom (NC State) City-scale Simulation of COVID-19 Pandemic and Intervention Policies using Agent-based Modeling Gaurav Suryawanshi, Varun Madhavan, Adway Mitra, and Partha Pratim Chakrabarti (IIT Kharagpur) Abstract Abstract During the Covid-19 pandemic, most governments across the world imposed policies like lock-down of public spaces and restrictions on people's movements to minimize the spread of the virus through physical contact. However, such policies have grave social and economic costs, and so it is important to pre-assess their impacts. In this work we aim to visualize the dynamics of the pandemic in a city under different intervention policies, by simulating the behavior of the residents. 
We develop a very detailed agent-based model for a city, including its residents, physical and social spaces like homes, marketplaces, workplaces, schools/colleges, etc. We parameterize our model for Kolkata city in India using ward-level demographic and civic data. We demonstrate that under appropriate choice of parameters, our model is able to reproduce the observed dynamics of the Covid-19 pandemic in Kolkata, and also indicate the counter-factual outcomes of alternative intervention policies. High Quality Masks Reduce Covid-19 Infections and Death in the US Erik Rosenstrom, Julie Swann, Julie Ivy, and Maria Mayorga (NC State University); Buse Eylul Oruc and Pinar Keskinocak (Georgia Institute of Technology); and Nathaniel Hupert (Cornell University) Abstract Abstract The objective is to evaluate the effect of widespread adoption of masks on community transmission of SARS-CoV2. We employed an agent-based stochastic network simulation model and a variant of a SEIR disease model with one million agents in census tracts representing a population of 10.5 million. We evaluated scenarios with 25% to 90% mask-related reduction in viral transmission (mask efficacy). Each individual wears a mask with a discrete probability in [0%, 100%] (mask adherence). A mask order was initiated 3.5 months after the first confirmed case, with temporary state-wide distancing and voluntary quarantining of households. If 50% of the population wears masks that are 50% effective, this decreases the cumulative infection attack rate (CAR) by 27%, the peak prevalence by 54%, and the population mortality by 29%. If 90% wear masks that are 50% effective, this decreases the CAR by 38%, the peak prevalence by 75%, and the population mortality by 55%.
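The mask efficacy and adherence mechanism described in the abstract above can be illustrated with a toy agent-based SEIR simulation. This is a minimal sketch with invented parameters and uniform random mixing, not the authors' census-tract network model: each agent wears a mask with probability `adherence`, and a mask on either party of a contact scales the transmission probability by (1 - efficacy).

```python
import random

def simulate(adherence, n=2000, days=150, beta=0.05, contacts=10,
             sigma=0.2, gamma=0.1, efficacy=0.5, seed=1):
    """Toy SEIR agent model: each infectious agent meets `contacts` random
    agents per day; masks scale per-contact transmission by (1 - efficacy)."""
    S, E, I, R = 0, 1, 2, 3
    rng = random.Random(seed)
    state = [I] * 10 + [S] * (n - 10)          # seed 10 initial infections
    masked = [rng.random() < adherence for _ in range(n)]
    for _ in range(days):
        new_E, new_I, new_R = [], [], []
        for i in [k for k in range(n) if state[k] == I]:
            for _ in range(contacts):
                j = rng.randrange(n)
                p = beta
                if masked[i]:
                    p *= 1 - efficacy          # infectious agent is masked
                if masked[j]:
                    p *= 1 - efficacy          # contact is masked
                if state[j] == S and rng.random() < p:
                    new_E.append(j)
        for k in range(n):
            if state[k] == E and rng.random() < sigma:
                new_I.append(k)                # latent -> infectious
            elif state[k] == I and rng.random() < gamma:
                new_R.append(k)                # infectious -> recovered
        for j in new_E: state[j] = E
        for k in new_I: state[k] = I
        for k in new_R: state[k] = R
    return 1 - state.count(S) / n              # cumulative infection attack rate

car_none = simulate(adherence=0.0)
car_half = simulate(adherence=0.5)
car_most = simulate(adherence=0.9)             # higher adherence, lower attack rate
```

Even in this crude sketch, raising adherence from 0% to 90% at 50% efficacy visibly lowers the cumulative attack rate, mirroring the qualitative direction of the paper's findings.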
Invited Paper, Contributed Paper, Technical Session · Project Management and Construction [Virtual] Simulation for Emergency Planning and Response Chair: Wenying Ji (George Mason University) Public Demand Estimation following Disasters through Integrating Social Media and Community Demographics Yudi Chen and Wenying Ji (George Mason University) Abstract Abstract Following disasters, a timely and reliable estimation of public demand—the number of individuals having demand—is essential for allocating relief resources properly. However, such an estimation is challenging as public demand varies significantly in dynamic disasters. To address the challenge, this research estimates public demand by proposing a data-driven approach that integrates social media and community demographics. In detail, social media is used to derive the percentage of a population having demand, and demographics are applied to normalize population differences amongst races/ethnicities. The proposed approach is capable of (1) eliminating the social media bias caused by racial disparities on social media platforms, and (2) modeling the uncertainty of social media-derived demand percentage. Hurricane Irma-induced food demand in Florida is studied to demonstrate the feasibility of the proposed data-driven approach. In addition, the research sheds light on the use of partial information for deriving insights for the entire population. Creating an Inter-hospital Resilient Network for Pandemic Response Based on Blockchain and Dynamic Digital Twins Qiuchen Lu (University College London); Xiang Xie (University of Cambridge); Long Chen (Loughborough University); and Zhen Ye, Zigeng Fang, Jiayin Meng, Michael Pitt, and Jinyi Lin (University College London) Abstract Abstract Developing and using the rich data implied by dynamic digital twins and blockchain is relevant to managing both patients and medical resources (e.g., doctors/nurses, PPE, beds and ventilators, etc.) during the COVID-19 and post-COVID periods.
This paper learns from the experiences of resource deployment/redeployment and pandemic response in UK hospitals to explore blockchain solutions for making healthcare systems ready both for efficient daily operation and for pandemic response, through (1) information integration of patient (privacy protected) flow and medical resource flow from healthcare and medical records; (2) optimizing the deployment of such resources based on regions and local pandemic levels, switching from normal to outbreak conditions. The main idea is to develop a novel framework for creating an inter-hospital resilient network for pandemic response based on blockchain and dynamic digital twins, which will set up innovative ways to best care for patients, protect the NHS, and support government scientific decisions. Machine Learning and Simulation-Based Framework for Disaster Preparedness Prediction Best Contributed Theoretical Paper - Finalist Zhenlong Jiang, Ran Ji, Yudi Chen, and Wenying Ji (George Mason University) Abstract Abstract Sufficient preparedness is essential to community resilience following natural disasters. Understanding disaster preparedness of residents in the affected area improves the efficiency and equity of relief operations. This research aims to develop a machine learning and simulation-based approach to predict disaster preparedness using various demographic features from multisource data. The proposed approach comprises four steps: (1) collecting and integrating various data sources, including the FEMA National Household Survey data, US census data, and county-level disaster declaration data; (2) training multiple classification models with the prepared data set and selecting the model with the best prediction performance; (3) simulating resident demographic features at the county level; (4) predicting disaster preparedness status with simulated data for a selected county. A case study is presented to demonstrate the reliability and applicability of the proposed framework.
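As a rough illustration of steps (2)–(4) of the four-step approach above, the following sketch pairs a stand-in classifier with simulated resident features. All features, coefficients, and rates below are invented for illustration; they are not drawn from the FEMA, census, or declaration data the paper uses.

```python
import math
import random

# Step (2) stand-in: a hypothetical logistic classifier (coefficients invented).
WEIGHTS = {"owns_home": 0.8, "prior_disaster": 1.1, "income_k": 0.01}
BIAS = -1.5

def predict_prepared(resident):
    """Classify one simulated resident as prepared / not prepared."""
    z = BIAS + sum(w * resident[f] for f, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z)) > 0.5        # logistic score, 0.5 cutoff

# Step (3) stand-in: simulate county residents from marginal demographic rates
# (ownership rate, prior-disaster rate, and income distribution are illustrative).
def simulate_residents(n, rng):
    return [{"owns_home": rng.random() < 0.6,
             "prior_disaster": rng.random() < 0.3,
             "income_k": rng.gauss(55, 20)} for _ in range(n)]

# Step (4): county-level preparedness rate from the simulated population.
rng = random.Random(0)
residents = simulate_residents(10_000, rng)
prepared_rate = sum(predict_prepared(r) for r in residents) / len(residents)
```

In the paper's framework the classifier would be the best-performing trained model and the simulated features would follow real county demographics; here both are placeholders showing how the pieces compose.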
Technical Session · Poster Session Poster Session Chair: María Julia Blas (INGAR CONICET UTN) DES For The Efficient Design Of A Jackets Nodes Workshop Using Disruptive Technologies Adolfo Lamas Rodríguez (NAVANTIA), Belén Sañudo Costoya (UMI Navantia - UDC), Inés Taracido López (NAVANTIA), Santiago José Tutor Roca (UMI Navantia-UDC), and Clara Varea Delgado (NAVANTIA) Abstract Abstract In this study, Discrete Event Simulation (DES) has been used for the development of a jackets nodes workshop, located at the NAVANTIA - Fene shipyard facilities. Specifically, three plant layouts were analyzed, in which different levels of equipment technology were used (from the least innovative and disruptive to the most). In this way, we were able to obtain the most favorable workshop design while minimizing the technological investment needed to meet the pre-set project takt-time. With this methodology, the company can verify that the project investment is profitable and minimizes the probability of missing due dates in jacket projects. Discrete Event Simulation for Evaluation of the Performance of a Covid-19 Vaccine Clinic S. Yasaman Ahmadi and Jennifer Lather (University of Nebraska-Lincoln) Abstract Abstract Mass vaccination is one strategy for controlling the spread of SARS-CoV-2 resulting in COVID-19 and for quick deployment of vaccines in places with high transmission rates. Gathering people for vaccination against infectious respiratory diseases in a building can cause concerns regarding safety and public health. Therefore, designing safe vaccine clinics is a priority in current and future pandemics. Designing a COVID-19 vaccine clinic differs from designing traditional clinics. In this study, discrete event simulation is used to simulate a COVID-19 vaccine clinic to study the balance of resources needed for several sizes of clinics.
We assess the time in system (TIS), number in the system (NIS), the maximum number in waiting queues, maximum waiting time in queues, and scheduled utilization, while considering COVID-19 safety precautions such as social distancing. This study aims to support health administrators to plan the operation of COVID-19 vaccine clinics efficiently beyond current planning methods. Reinforcement Learning With Discrete Event Simulation: The Premise, Reality, And Promise. Sahil Belsare and Mohammad Dehghanimohammadabadi (Northeastern University) Abstract Abstract Recently, Reinforcement Learning (RL) has been successfully applied in domains like manufacturing, supply chain, health care, finance, robotics, and autonomous vehicles. For such applications, uncertainty in the real-life environment presents a significant challenge in training an RL agent. One of the approaches to tackle this obstacle is by augmenting RL with a Discrete Event Simulation (DES) model. This framework enables us to accommodate the list of possible circumstances occurring in the practical environment. In this presentation, we analyze the existing literature on RL models that use DES to put forward the benefits, application areas, challenges, and scope for future work in developing such models for industrial optimization. Streamlining The United States Immigration Court System: Using Simulation and Data Science to Effectively Deploy Capacity Geri Dimas, Adam Ferrarotti, Renata A. Konrad, and Andrew C. Trapp (Worcester Polytechnic Institute) Abstract Abstract There is a significant and growing backlog in the United States immigration court system, with over a million cases waiting to be heard. The backlog is particularly challenging to manage due to large influxes of migrants in recent years, coupled with antiquated design and resource limitations. This influx is causing delays that unnecessarily tax government and community resources while putting many lives on hold.
We explore modeling the intricacies of the immigration court system, reconstructing its various elements and their respective complexity through discrete event simulation and machine learning. We study possible improvements to the simulated system affecting capacity, such as the number of judges, queueing discipline, and alternative ways to distribute available capacity. On The Hausdorff Distance Between A Pareto Set And Its Discretization Burla Ondes and Susan Hunter (Purdue University) Abstract Abstract Our broad goal is to derive bounds on the performance of bi-objective simulation optimization algorithms that seek the global efficient set on a compact feasible set. Toward this end, we bound the expected Hausdorff distance from the true efficient set to the estimated discretized efficient set by the sum of deterministic and stochastic error terms. We provide an upper bound on the deterministic error term in the context of bi-objective convex quadratic optimization with spherical level sets. Our bound implies that if t is the dispersion of the observed points measured in the decision space, then this Hausdorff distance between the Pareto set and its discretization is O(sqrt(t)) as t decreases to zero. Developing a Driving Model for Workload Evaluation Josalin Kumm (University of Wisconsin-Madison) and Holly Handley and Yusuke Yamani (Old Dominion University) Abstract Abstract Driving simulation provides a platform that allows researchers to investigate driving behaviors in a controlled environment. Distracted driving occurs when a driver engages in a driving-unrelated secondary task that diverts their attention from the roadway and the driving task. This study compares driver workload using simulation models as a surrogate for driver distraction. Data were obtained from a study where drivers navigated in a simulated world with varying levels of workload manipulated in the n-back task.
The results of the two simulation models are compared to the human subject data. Automation in the Process of Knowledge Discovery in Simulation Data Jonas Genath (Technische Universität Ilmenau) Abstract Abstract In contrast to classical simulation studies, the method of knowledge discovery in simulation data uses a simulation model as a data generator (data farming). Data mining methods can then uncover hidden, previously unknown, and potentially useful cause-effect relationships. So far, however, there is a lack of support and automation tools for non-experts or novices in knowledge discovery in simulation data, which makes industrial application more difficult and prevents broader utilization. In this work, we propose a concept for automating and supporting the knowledge discovery in simulation data process. Discrete Event Simulation of Smart Parking Conflict Management Antoine Dominici (SPE UMR CNRS 6134, Town Hall of Bastia) and Laurent Capocchi, Emmanuelle De Gentili, and Jean-François Santucci (SPE UMR CNRS 6134) Abstract Abstract Smart parking is a framework aimed at optimizing the occupancy of parking spots based on specifications that include the behavior of drivers. One of the challenges in this area concerns the determination of a reliable model able to resolve cumulative parking conflicts that appear when many drivers look for parking in a dynamic environment system where user behavior is paramount. This abstract presents a discrete-event modeling and simulation approach dedicated to proposing conflict management strategies based on the estimated travel time to reach desired places around a specific area. Incorporating Knowledge Discovery Technology in Micro-Dynamic Analysis Method Shuang Chang, Hiroaki Yamada, Shohei Yamane, and Kotaro Ohori (Fujitsu Ltd.) Abstract Abstract An agent-based modelling approach is powerful in modelling individual behaviors and social interactions to investigate the resulting social phenomena.
Yet this advancement in modelling poses challenges in the analysis process, which is often complicated due to the large volume of simulation logs generated, and the combined effects of input factors. In this paper, we propose a revised micro-dynamic analysis method by adopting a knowledge discovery technology to identify influential combinations of factors causing a target phenomenon and to improve the interpretability of results. We apply this method to the simulation logs generated from an agent-based model which investigated the impacts of group-based learning on cooperative behaviors. It is demonstrated that the method can thoroughly examine the combined effects of input variables from both micro- and meso-perspectives simultaneously, without depending on modelers’ analysis skills, and can suggest policies from a perspective different from that of the original analysis. Mathematical Modeling of Novel Coronavirus (Sars-Cov-2) Infection in Dialysis Facilities Michal Mankowski (KAUST) and Abdulrahman Housawi and Shahzada Junaid Qazi (Ministry of Health, Kingdom of Saudi Arabia) Abstract Abstract Dialysis patients likely carry an additional risk for contracting (and spreading) the infection to others as they continue necessary treatment at dialysis facilities. They visit dialysis facilities multiple times a week, regardless of curfew and lockdown regimes. In addition, dialysis patients are at a heightened risk for developing complications due to their inherently compromised immune systems. The aim of this simulation study was to evaluate and model the practice patterns, relevant scheduling, and testing scenarios applied to limit the spread of infections among the dialysis patient population along with the associated staff. Using agent-based simulation, we evaluated the attack rate among the targeted population. The simulation demonstrated the importance of applying countermeasures as well as intense testing scenarios.
Our study found that antigen testing at weekly and fortnightly intervals may perform better than PCR testing in reducing spread. Optimizing Warehouse Picking Operations Using Autonomous Mobile Robots: a Simulation Approach Vipul Garg and Jacob Maywald (University of North Texas) Abstract Abstract Order picking is a labor-intensive operation and contributes significantly to overall operational costs in a warehouse. While prior literature suggests collaborative technologies, including autonomous mobile robots (AMRs), can increase picking efficiency, few studies have attempted to find an optimal ratio between AMRs and human pickers under a free-floating policy. This research develops a simulation model to evaluate the performance of order picking with varying numbers of AMRs and pickers, taking into account both traditional and cross-aisled warehouse layouts. An experiment comparing runs of 48 total scenarios was conducted in a simulated environment. The results suggest that operational efficiency peaks at around a 2:1 AMR to picker ratio. Furthermore, the addition of a cross-aisle provided an increase of up to 6% picking efficiency (2% on average). Simulating The Active Cases and Hospitalization Cases of COVID-19 Shaon Bhatta Shuvo and Ziad Kobti (University of Windsor) Abstract Abstract Simulating and predicting the active COVID-19 cases and hospitalization cases in advance can be helpful to minimize the catastrophe of this persistent outbreak. This study proposed a novel Agent-based modelling (ABM) framework based on various temporal and non-pharmaceutical parameters to predict active cases and hospitalization cases. We evaluated the model's performance based on COVID-19 data of the Windsor-Essex county region of Ontario, Canada, and achieved satisfactory results in predicting active cases and hospitalization cases.
Experimental results have demonstrated that the simulations provide helpful information that could help take steps in advance to compensate for shortages in hospital resources and take necessary steps to reduce the number of infections. A Framework for Supporting Simulation of MBSE Models Nicholas Engle and Michael Miller (Air Force Institute of Technology) Abstract Abstract Application of Model-Based Systems Engineering (MBSE) requires the integration of robust tools to provide system description and dynamic model execution. MBSE tools, such as Cameo Systems Modeler, facilitate robust system descriptions. While these tools often have built-in simulation capabilities, they are often limited in scope and lack features for robust modeling of system behavior and performance. We present a prototype toolkit which extends SimPy, an open-source Python-based Discrete-Event Simulation library. Using custom SysML stereotypes for defining object, environment, and activity properties, our prototype extracts information from Cameo model files to build and run a graph-based Python discrete event simulation. Digital Twin Based Cyber-Attack Detection For Manufacturing Systems Chandrasekhar Sivasubramanian, Giulia Pedrielli, Petar Jevtic, and Georgios Fainekos (Arizona State University) and Mani Janakiram (Intel Corporation) Abstract Abstract Breakthrough paradigms and technologies are emerging across the manufacturing and semiconductor sectors, as high-tech manufacturing witnesses unprecedented opportunities. In fact, next generation fabs are Cyber-Physical Systems (CPS) which integrate the physical and the information layer using networks, sensors and data processing. In these connected systems, there are several interactions between the equipment and the information layer (Cloud storage). With unprecedented opportunities came unprecedented challenges. In this work, we focus on cyber-attacks in semiconductor smart manufacturing.
While cyber-attacks have been formulated and analyzed for several critical infrastructures (power, water, gas, etc.), developments in the manufacturing sector in general, and semiconductors in particular, are still nascent. In this paper, we formalize new attack categories, provide ways to deploy them, and propose alternative ways to detect them. A preliminary empirical analysis is provided for a lithography system. Simulating Truck Fleet Configuration for Wood Terminals Christoph Kogler, Alexander Stenitzer, and Peter Rauch (University of Natural Resources and Life Sciences, Vienna) Abstract Abstract The alarming bottleneck of self-loading truck capacity after forest calamities challenges resilient wood transport in leading countries of the wood-based industry. Consequently, a discrete event simulation model of a multi-echelon unimodal wood supply chain, spanning from self-loading truck pickup at forest landings to wood transshipment at terminals and final semitrailer truck transport to industry, was developed to provide optimal truck fleet configurations for different terminal configurations. Varying transport distance, terminal utilization and truck payload scenarios provide valuable decision support to develop contingency planning strategies for various regions. Optimal results regarding the number of self-loading trucks, prime mover trucks and semitrailers deduced by full enumeration outperformed unimodal transport cost benchmarks for short, medium and long distances by 5.45%, 6.95% and 11.28%, respectively. In order to better manage increasingly frequent natural disturbances, future research should extend simulation models to include intermediate storage in wood stockyards and to consider wood value loss.
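Full enumeration over a small configuration space, as used in the abstract above to deduce optimal fleet configurations, can be sketched as follows. The cost model, capacities, demand, and fleet bounds below are invented placeholders for illustration, not the paper's data.

```python
from itertools import product

# Illustrative daily-cost model (all rates and capacities are invented).
def daily_cost(self_loading, prime_movers, semitrailers, demand_m3=400):
    """Cost of serving daily wood demand with a given fleet, or inf if infeasible.

    Terminal transshipment needs matched prime movers and semitrailers;
    the remainder is covered by direct self-loading truck transport."""
    terminal_capacity = 60 * min(prime_movers, semitrailers)   # m3/day
    direct_capacity = 45 * self_loading                        # m3/day
    if terminal_capacity + direct_capacity < demand_m3:
        return float("inf")                                    # infeasible fleet
    return 900 * self_loading + 700 * prime_movers + 150 * semitrailers

# Full enumeration over all fleet configurations within small bounds.
best = min(product(range(0, 11), repeat=3), key=lambda cfg: daily_cost(*cfg))
```

With these placeholder numbers the terminal route is cheaper per cubic meter, so enumeration selects a terminal-only fleet of seven matched prime movers and semitrailers; the paper's model evaluates configurations against simulated transport cost rather than a closed-form expression.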
Multi-objective Simulation Optimization Of The Adhesive Bonding Process Of Materials Alejandro Morales-Hernández, Inneke Van Nieuwenhuyse, and Sebastian Rojas Gonzalez (Hasselt University, Data Science Institute) and Jeroen Jordens, Maarten Witters, and Bart Van Doninck (Flanders Make) Abstract Abstract Automotive companies are increasingly looking for ways to make their products lighter, using novel materials and novel bonding processes to join these materials together. Finding the optimal process parameters for such an adhesive bonding process is challenging. In this research, we successfully applied Bayesian optimization using Gaussian Process Regression and Logistic Regression, to efficiently (i.e., requiring few experiments) guide the design of experiments to the Pareto-optimal process parameter settings. Technical Session · Healthcare Applications [Virtual] Applications of Simulation in Healthcare I Chair: F. LeRon Shults (University of Agder) Using Discrete Event Simulation to Improve Performance at Two Canadian Emergency Departments Evgueniia Doudareva (University of Toronto) Abstract Abstract Emergency Departments’ (EDs) critical role in patient care and their complex process flow contribute to them being one of the most frequently modeled systems in healthcare Operations Research (OR). The goal of this research was to develop models of two EDs that could diagnose bottlenecks and evaluate performance improvement approaches. We used Discrete Event Simulation (DES) to model two EDs in Toronto, Canada, based on existing processes and empirical data. Model outputs include wait times, treatment times, and selected process durations. Management of both EDs used the models to evaluate performance and preview the effects of staffing and flow changes before committing to the improvement measures.
Examples of successful performance improvements include a new triage flow for patients arriving by ambulance, merging of the treatment zones, and increases in staffing levels. Artificial Societies in the Anthropocene F. LeRon Shults (University of Agder), Wesley J. Wildman (Boston University), Monica Toft (Tufts University), and Antje Danielson (MIT) Abstract Abstract Computational approaches to climate modeling have advanced rapidly in recent years, as have the tools and techniques associated with the construction of artificial life, artificial societies, and social simulation experiments. However, the use of computer simulation to study the effects of climate change on human conflict and cooperation is still relatively rare. We consider some of the challenges and opportunities that face interdisciplinary teams seeking to develop models that incorporate human, ecological, and natural systems. There is an urgency to this task because climate-abetted socio-economic stress can trigger conflict at all scales, which exacerbates human suffering. We argue that the interdisciplinary community of scholars with expertise in social simulation and artificial life has a unique opportunity to collaborate and address these challenges by developing artificial societies capable of uncovering adaptive pathways that can minimize social conflict and maximize cooperation in the face of climate-abetted social and ecological change. Panel [Virtual] Panel Session: Coffee with ... Chair: Cristina Ruiz-Martín (Carleton University) Coffee With ... Cristina Ruiz Martin (Carleton University) Abstract Abstract Did you ever wonder what is behind the scenes when organizing a conference? Did you always want to approach some of the conference organizers to ask these questions, but you did not have enough courage? Do you want some tips on approaching other participants (e.g., a senior professor) during conferences?
If the answer to any of these questions is "yes", this is your panel. Different members of the organizing committee will be here to chat with you and answer those questions you never had the chance to ask. Invited Paper · Hybrid Simulation [Virtual] Hybrid Simulation Modeling Chair: Nurul Izrin Md Saleh (Universiti Malaysia Kelantan) Hybrid Conceptual Modeling for Simulation: An Ontology Approach during COVID-19 Nurul Izrin Md Saleh (Universiti Malaysia Kelantan), David Bell (Brunel University London), and Zuharabih Sulaiman (University of Malaya) Abstract Abstract The recent outbreak of Covid-19 caused by SARS-CoV-2 infection that started in Wuhan, China, has quickly spread worldwide. Due to the rapidly growing number of cases, the entire healthcare system has to respond and make decisions promptly to ensure it does not fail. Researchers have investigated the integration between ontology, algorithms and process modeling to facilitate simulation modeling in emergency departments and have produced a Minimal-Viable Simulation Ontology (MVSimO). However, the “minimalism” of the ontology has yet to be explored to cover pandemic settings. Responding to this, modelers must redesign services that are Covid-19 safe and better reflect changing realities. This study proposes a novel method that conceptualizes processes within the domain from a Discrete-Event Simulation (DES) perspective and utilizes prediction data from an Agent-Based Simulation (ABS) model to improve the accuracy of existing models. This hybrid approach can be helpful to support local decision making around resource allocation. Systemic Characteristics to Support Hybrid Simulation Modeling Tillal Eldabi (University of Surrey, Surrey Business School) Abstract Abstract Hybrid simulation (HS) is a modeling approach based on combining System Dynamics, Discrete Event Simulation, and/or Agent Based Simulation into a single model.
Many benefits have been identified for utilizing HS for planning and decision-making across many sectors. However, the lead time and skill requirements for developing HS models are usually greater than for single-methodology models. This position paper proposes that in order to improve and speed up the development of HS models, the decision to hybridize should be taken at the earliest possible point, i.e. when investigating the system and defining the problem. To this end, five system-based characteristics have been proposed as decision points that help modelers to make such decisions. The paper concludes by suggesting a number of research avenues to follow for further improvements, whilst highlighting further challenges related to the availability of skills and tools for developing HS models. Technical Session · Project Management and Construction [Virtual] Simulation for Built Environment and Construction Logistics Chair: Khandakar Rashid (Arizona State University) Evaluating Supply- And Reverse Logistics Alternatives in Building Construction Using Simulation Christina Gschwendtner and Anne Fischer (Technical University Munich), Iris Tommelein (University of California), and Johannes Fottner (Technical University Munich) Abstract Abstract Building construction is a one-of-a-kind production with complexity caused by interdependencies between logistical and construction-operation processes. Especially the finishing phase involves numerous trades who install a wide variety of materials and have to share logistical resources. Traditional planning approaches based on analytical tools and experience gained from historical data fall short in providing the support needed to evaluate different logistical strategies. Therefore, the present work aims at developing a user-friendly simulation model that considers both material supply and waste removal through a third-party logistics partner (TPLP).
The process interdependencies are investigated using data from a real hotel building project. A kitting solution is evaluated as an alternative, using consolidation centers to reduce material handling on site. Comparing the supply- and reverse logistics of these alternatives, kitting appears to be more costly; however, the model does not capture potential improvements in performance of construction operations and coordination of processes. Agent-based Simulation to Predict Occupants' Physical Distancing Behaviors in Educational Buildings Bogyeong Lee (Dankook University) and Changbum Ryan Ahn (Texas A&M University) Abstract Abstract With the long-term impact of the global pandemic, maintaining physical distance in daily life is strongly recommended. Physical distancing is the most efficient strategy to protect individuals by lowering the risk of community spread. Physical-distancing policies are considered essential for indoor spaces (e.g., offices, schools, stores). We develop an agent-based simulation model to test physical-distancing policies (i.e., controlling occupant capacity, break time between classes, and occupants' behavioral tendency to maintain distance) based on multiple layers of occupancy behaviors that may occur in an educational building. The model measures and compares the possible risk that occupants experience from physical-distance violations under each policy. The results show the impacts of capacity, break time duration, and internal intent to maintain physical distance on the risks of violating physical distance. This study can contribute to designing physical-distancing policies for educational buildings.
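The distance-violation risk measure described in the abstract above can be illustrated with a very crude sketch: place occupants in a room and count pairs closer than the distancing threshold under different capacity policies. This static placement is a stand-in for the paper's behavior-based agent movement, and the room size, threshold, and occupant counts are invented.

```python
import random
from itertools import combinations

def violation_count(n_occupants, room_w=20.0, room_h=10.0, min_dist=2.0, seed=0):
    """Place occupants uniformly at random in a room and count pairs
    closer than `min_dist` meters (one snapshot's violation risk)."""
    rng = random.Random(seed)
    pts = [(rng.uniform(0, room_w), rng.uniform(0, room_h))
           for _ in range(n_occupants)]
    return sum(1 for (x1, y1), (x2, y2) in combinations(pts, 2)
               if (x1 - x2) ** 2 + (y1 - y2) ** 2 < min_dist ** 2)

# Comparing capacity policies over repeated snapshots:
full = sum(violation_count(40, seed=s) for s in range(20))   # full capacity
half = sum(violation_count(20, seed=s) for s in range(20))   # halved capacity
```

Because the number of occupant pairs grows roughly quadratically with capacity, halving occupancy cuts violations by far more than half, which is the kind of capacity effect the agent-based model quantifies with realistic movement behaviors.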
Automated Active And Idle Time Measurement In Modular Construction Factory Using Inertial Measurement Unit and Deep Learning For Dynamic Simulation Input Khandakar Mamunur Rashid and Joseph Louis (Oregon State University) Abstract Modular construction is gaining popularity in the USA for its several advantages over stick-built methods in terms of reduced waste and time. However, productivity monitoring is essential to realizing the full potential of modular construction methods. This paper proposes a framework to automatically measure active and idle time at various workstations in modular construction factories, which essentially dictates the efficiency of production. This cycle time information can be used as input for dynamic prediction using simulation modeling. Vibration data were collected from workstations using inertial measurement units (IMUs), and a deep learning network was used to extract active and idle time from the vibration data. The results of this study showed that the proposed methodology can automatically calculate the active and idle time at various workstations with a 2.7% average error. This presents the potential of utilizing sensors and AI with simulation modeling for production monitoring and control. Technical Session · Project Management and Construction [Virtual] Digital Twins in the Built Environment Chair: Fernanda Leite (The University of Texas at Austin) An Automated Framework For Generating Synthetic Point Clouds from As-built BIM with Semantic Annotation for Scan-to-BIM Jong Won Ma, Bing Han, and Fernanda Leite (The University of Texas at Austin) Abstract Data scarcity is a major constraint that hinders Scan-to-BIM's generalizability in unseen environments. Manual data collection is not only time-consuming and laborious, but acquiring 3D point clouds is also generally limited by indoor environment characteristics.
In addition, ground-truth information needs to be attached for effective utilization of the acquired dataset, which also requires considerable time and effort. To resolve these issues, this paper presents an automated framework that integrates the generation of synthetic point clouds with semantic annotation from as-built BIMs. A procedure is demonstrated using commercially available software systems. The viability of the synthetic point clouds is investigated using a deep learning semantic segmentation algorithm by comparing its performance with real-world point clouds. Our proposed framework can potentially provide an opportunity to replace real-world data collection through the transformation of existing as-built BIMs into synthetic 3D point clouds. Explainable Modeling in Digital Twin Lu Wang and Tianhu Deng (Tsinghua University) and Zeyu Zheng and Zuo-Jun Shen (University of California, Berkeley) Abstract Stakeholders' participation in the modeling process is important to successful Digital Twin (DT) implementation. The key question in the modeling process is deciding which options to include. Explaining this key question clearly ensures that organizations and end-users know what the digital models in a DT are capable of. To support successful DT implementation, we propose a framework of explainable modeling to enable collaboration and interaction between modelers and stakeholders. We formulate the modeling process mathematically and develop three types of automatically generated explanations to support understanding and build trust. We introduce three explainability scores to measure the value of explainable modeling. We illustrate how the proposed explainable modeling works through a case study on developing and implementing a DT factory.
Explainable modeling increases communication efficiency and builds trust by clearly expressing the model's competencies, answering key modeling questions automatically, and enabling a consistent understanding of the model. Seeing Through Walls: Real-Time Digital Twin Modeling of Indoor Spaces Best Contributed Applied Paper - Finalist Fang Xu, George Pu, Paul Wei, Amanda Aribe, James Boultinghouse, Nhi Dinh, and Jing Du (University of Florida) Abstract Systems that can augment human spatial senses in complex built environments have become increasingly important. Sensor-based 3D mapping and augmented reality (AR) techniques have been tested to provide enhanced visual assistance in indoor search and rescue, such as providing "seeing through walls" functions for occluded areas. There is still a need for evidence showing how scene capture, 3D mapping, and real-time rendering and visualization can be executed in an integrative manner, enabling instantaneous "Digital Twin" modeling. This paper presents a robot-based system for mapping indoor environments and creating a near real-time virtual replica of the physical space. The entire process is executed in pseudo real-time to create a dynamically updated 3D map based on scanners and sensors carried by a quadrupedal robot. The map is then used to generate egocentric "x-ray vision" views of occluded objects using AR headsets. A case study was performed in a lab room. Commercial Case Study · Commercial Case Studies [Virtual] Digital Twin Chair: David T.
Sturrock (Simio LLC) Digital Twin Applications for Design and Operation of AGVs in Shop Floor Donggun Lee, Seunghyun Song, Chanhyeok Lee, and Sang Do Noh (Sungkyunkwan University) and Sangmun Yun and Hyeonyeong Lee (LG Electronics) Abstract With the introduction of Smart Manufacturing technology in recent years, manufacturers are introducing automated facilities to increase operational efficiency and productivity or to become automated factories, and a representative example is Automatic Guided Vehicle Systems (AGVs). When designing and operating these AGVs, a difficulty is that abnormal situations cannot be predicted and responded to until the system is actually configured and operated. To solve these problems, it is necessary to preview and verify systems in a virtual environment at the design stage, and to introduce a Digital Twin (DT) that can monitor and analyze the configured systems at the operation stage. This paper proposes a DT that can be used for diagnosis, analysis, prediction, and optimization in the design and operation of AGVs, and verifies its effectiveness by applying it to a real manufacturer that operates an AGV system in South Korea. Digital twin-based applications for assembly production lines of global automotive part suppliers Joohee Lym, Jinho Yang, Jonghwan Choi, and Sang Do Noh (Sungkyunkwan University); Sang Hyun Lee and Jeong Tae Kang (IT Division Yura Corporation); and Jungmin Song, Dae Yub Lee, and Hyung Sun Kim (Dexta Inc.) Abstract Nowadays, several manufacturing companies design, engineer, and produce their products through a globally distributed supply chain. In an increasingly complex and uncertain automotive manufacturing environment, it is essential to effectively manage distributed global supply chains to enhance efficiency and responsiveness. Accordingly, these manufacturers make great efforts to operate manufacturing systems efficiently using smart manufacturing (SM) technology.
However, it is difficult to collect real-time information and support rapid decision-making from distributed manufacturing sites with existing independent applications. To solve these problems, this study proposes a digital twin (DT)-based application using real-time manufacturing data collected from the manufacturing sites. In addition, the proposed application was verified through a case study. X-TEAM D2D: Modeling Future Smart and Seamless Travel in Europe Margarita Bagamanova and Miguel Mujica Mota (Amsterdam University of Applied Sciences) and Vittorio Di Vito (Italian Aerospace Research Center) Abstract Future transportation technologies will change the way passengers travel to their destinations. In Europe, there is an ambition to achieve door-to-door travel times of no longer than four hours by 2050. For this, air transport will need to be integrated into the overall multimodal transport network in a smart and efficient way. Inspired by this challenge, project X-TEAM D2D will develop a Concept of Operations for integrating Air Traffic Management and Urban Air Mobility into an overall multimodal transport network, considering the urban and extended urban environment up to a regional extent. This study presents preliminary results on the expected performance of the intermodal transport network in the following decades. The results provide insight into the expected impact of future smart transport technologies on passengers' travel. Multi-Method Simulations For Large Fleet Autonomous Material Handling Robots Bonnie Yue (Clearpath Robotics) Abstract Deployment of large-fleet Autonomous Mobile Robots (AMRs) for material handling has unique challenges that render a purely discrete-event simulation approach insufficient in capturing AMR operations.
In this case study, discrete-event, agent-based, and physics-based simulations are used together to mimic factory operations and autonomous robot behavior, as well as to understand robot path planning and navigation performance. Applications of the multi-method simulation approach are discussed as they pertain to solving common real-world problems of deploying large AMR fleets, such as fleet sizing, navigation feasibility, traffic issues, and understanding the impacts of layout design.
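The multi-method idea outlined above can be sketched in miniature (a hypothetical fleet-sizing toy, not Clearpath's simulator): a discrete-event queue drives task completions, while a simple agent-level rule makes each robot's travel time grow with fleet congestion, so makespan can be compared across fleet sizes.

```python
import heapq
import random

def simulate_fleet(n_robots, n_tasks=200, seed=7):
    """Toy hybrid sketch: an event queue schedules task completions
    (discrete-event view), while each robot's service time is inflated
    by a congestion factor that grows with the number of busy robots
    (a crude agent-level interaction rule)."""
    rng = random.Random(seed)
    clock = 0.0
    events = []                      # (completion_time, robot_id)
    free = list(range(n_robots))     # idle robots
    pending = n_tasks
    while pending > 0 or events:
        # dispatch idle robots to pending tasks
        while free and pending > 0:
            r = free.pop()
            pending -= 1
            congestion = 1.0 + 0.02 * (n_robots - len(free))
            service = rng.expovariate(1.0) * congestion + 1.0
            heapq.heappush(events, (clock + service, r))
        # advance to the next completion event
        clock, r = heapq.heappop(events)
        free.append(r)
    return clock  # makespan for all tasks

small = simulate_fleet(5)
large = simulate_fleet(20)
```

Even with congestion penalizing larger fleets, the added parallelism dominates in this toy; a production study would replace the congestion constant with physics-based navigation results, as the abstract describes.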
Technical Session · Complex, Intelligent, Adaptive and Autonomous Systems [In person] M&S of Adaptive and Autonomous Systems Chair: Saurabh Mittal (MITRE Corporation) Development of a Reinforcement Learning-based Adaptive Scheduling Algorithm for Block Assembly Production Line Best Contributed Applied Paper - Finalist Panel · Plenary [In person] Panel: Women in Simulation Chair: Margaret Loper (Georgia Tech Research Institute) Vendor · Vendor [In person] Overview of Arena 16.1 and Application of New Features Plenary · Plenary Titan Talk: David Nicol Challenges and Approaches to the Modeling and Simulation of Gargantuan Discrete Systems Chair: Margaret Loper (Georgia Tech Research Institute) Vendor · Vendor [In person] Simio's new Neural Networks Features: An Iterative Process of Inference and Training Technical Session, Introductory Tutorial · Introductory Tutorials [In person] Work Smarter, Not Harder: A Tutorial on Designing and Conducting Simulation Experiments Chair: Daniel García de Vicuña (Public University of Navarre) Technical Session, Introductory Tutorial · Introductory Tutorials [In person] Multiple Streams with Recurrence-Based, Counter-Based, and Splittable Random Number Generators Chair: Giulia Pedrielli (Arizona State University) Technical Session, Introductory Tutorial · Commercial Case Studies [In person] Tested Success Tips For Simulation Project Excellence Chair: David T.
Sturrock (Simio LLC) Technical Session, Introductory Tutorial · Introductory Tutorials [In person] Tutorial: Graphical Methods for the Design and Analysis of Experiments Chair: Kelsea B Best (Vanderbilt University) Technical Session · Environment and Sustainability Applications [In person] Environmental and Sustainability Applications 1 Chair: Suzanne DeLong (The MITRE Corporation) Technical Session · Modeling Methodology [In person] Modeling Methodologies 2 Chair: Gabriel Wainer (Carleton University) Exploiting Provenance and Ontologies in Supporting Best Practices for Simulation Experiments: A Case Study on Sensitivity Analysis Technical Session · Modeling Methodology [In person] Agent Based Modeling Chair: Pia Wilsdorf (University of Rostock) Technical Session, Introductory Tutorial · Introductory Tutorials [In person] A Gentle Introduction To Bayesian Optimization Chair: Daniel Otero-Leon (University of Michigan) Vendor [In person] Rockwell Automation Vendor Workshop Technical Session · Logistics, Supply Chains, and Transportation [In person] Support Development of Control Systems Chair: Leon McGinnis (Georgia Institute of Technology) Designing and Implementing Operational Controllers for A Robotic Tote Consolidation Cell Simulation Scheduling and Controlling Multiple Vehicles on a Common Track in High-Powered Automated Vehicle Storage and Retrieval Systems Technical Session · Analysis Methodology [In person] Estimation Methodology 1 Chair: Shengyi He (Columbia University) Advanced Tutorial · Advanced Tutorials [In person] Reflections on Simulation Optimization Chair: Zeyu Zheng (Stanford University) Technical Session · Simulation Optimization [In Person] Applications & Related Methods Chair: Kimberly Holmgren (Georgia Institute of Technology) Technical Session · Simulation Optimization [In person] Global Search Methods Chair: Giulia Pedrielli (Arizona State University) Partitioning and Gaussian Processes for Accelerating Sampling in Monte Carlo Tree
Search for Continuous Decisions Advanced Tutorial · Advanced Tutorials [In person] Thinking Inside the Box: A Tutorial on Grey-Box Bayesian Optimization Chair: Russell R. Barton (Pennsylvania State University) Advanced Tutorial · Advanced Tutorials [In person] A Tutorial on How to Connect Python with Different Simulation Software to Develop Rich Simheuristics Chair: Andrea D'Ambrogio (University of Roma Tor Vergata) Technical Session · Simulation Optimization [In person] Local & Gradient-Based Search Methods Chair: David Eckman (University of Pittsburgh) Improved Complexity of Trust-region Optimization for Zeroth-order Stochastic Oracles with Adaptive Sampling Inexact-Proximal Accelerated Gradient Method for Stochastic Nonconvex Constrained Optimization Problems Technical Session · Covid-19 and Epidemiological Simulations [In person] Case studies of COVID-19 impacts and interventions Chair: Miguel Mujica Mota (Amsterdam University of Applied Sciences) DeepABM: Scalable and Efficient Agent-Based Simulations via Geometric Learning Frameworks - A Case Study for COVID-19 Spread and Interventions BPMN-Based Simulation Analysis of the COVID-19 Impact on Emergency Departments: a Case Study in Italy Technical Session · MASM: Semiconductor Manufacturing [In person] Production Planning for Wafer Fabs Chair: Lars Moench (University of Hagen) Predicting Cycle Time Distributions With Aggregate Modelling Of Work Areas In A Real-World Wafer Fab Technical Session · Healthcare Applications [In person] Long term planning for chronic diseases Chair: Priscille Koutouan (North Carolina State University) Using Longitudinal Health Records to Simulate the Impact of National Treatment Guidelines for Cardiovascular Disease Creating Simulated Equivalents to Project Long-term Population Health Outcomes of Underserved Patients: an Application to Colorectal Cancer Screening Technical Session · Manufacturing Applications [In person] MA2 Chair: Ana Moreira (Auburn
University) Applying Discrete-Event Simulation and Value Stream Mapping to Reduce Waste in an Automotive Engine Manufacturing Plant Plenary · Plenary [In person] Military Keynote Chair: Nathaniel D. Bastian (United States Military Academy) Technical Session · Military and National Security Applications [In person] Rockets, Active Shooter Defeat System, and Violence Modeling Chair: Andrew Hall (Marymount University, Institute for Defense Analysis) Technical Session · Simulation and Philosophy [In person] Real World Ethical Implications for Analysis Chair: Andreas Tolk (The MITRE Corporation) These two invited papers focus on epistemological and ethical challenges of using simulation for decision making. While the application domain is defense, the implications are generalizable. The presentations will be followed by a 30-minute discussion of the papers and their implications with the audience. Panel · Using Simulation to Innovate [In person] Panel Discussion on Simulation and AI Chair: Susan M. Sanchez (Naval Postgraduate School) Technical Session · Military and National Security Applications [In person] Multi-Agent Reinforcement Learning, Generative Methods, and Bayesian Neural Networks Chair: Andrew Hall (Marymount University, Institute for Defense Analysis) Technical Session · Simulation Education [In person] Simulation Education Chair: Andrew J. Collins (Old Dominion University); James F. Leathrum (Old Dominion University) Commercial Case Study · Commercial Case Studies [In person] Factory Environment Chair: Yusuke Legard (MOSIMTEC) Commercial Case Study · Commercial Case Studies [In person] Data Science Chair: David T. Sturrock (Simio LLC) Technical Session · MASM: Semiconductor Manufacturing [In person] Scheduling Applications in Semiconductor Manufacturing Chair: Semya Elaoud (Flexciton Ltd) Technical Session · MASM: Semiconductor Manufacturing [In person] MASM 1 Chair: Raphael Herding (FTK – Forschungsinstitut für Telekommunikation und Kooperation e.
V., Westfälische Hochschule) Technical Session · Military and National Security Applications [In person] Civil Infrastructure, Hostile Crowds and Military Modernization Chair: Nathaniel D. Bastian (United States Military Academy) Technical Session · Hybrid Simulation [In person] Hybrid Simulation Modeling and Methods Chair: Caroline C. Krejci (The University of Texas at Arlington) A RESTful Persistent DEVS-based Interaction Model for the Componentized WEAP and LEAP RESTful Frameworks Commercial Case Study · Commercial Case Studies [In person] Machine Learning Applications Chair: Nathan Ivey (MOSIMTEC LLC) Technical Session · Healthcare Applications [In person] Applications of Simulation in Healthcare Chair: Daniel Garcia de Vicuna (Public University of Navarre) Physician Shift Scheduling to Improve Patient Safety and Patient Flow in the Emergency Department Technical Session · Data Science for Simulation [In person] DSS 3 Chair: Hamdi Kavak (George Mason University); Abdolreza Abhari (Ryerson University) Commercial Case Study · Commercial Case Studies [In person] Data-Driven Simulation Chair: Nathan Ivey (MOSIMTEC LLC) Commercial Case Study · Commercial Case Studies [In person] Logistics Chair: Amy Greer (MOSIMTEC, LLC) Technical Session · Using Simulation to Innovate [In person] Using Simulation and Digital Twins to Innovate - Are We Getting Smarter? Chair: Simon J. E.
Taylor (Brunel University London) Technical Session · Model Uncertainty and Robust Simulation [In person] New Advances in Simulation Optimization and Estimation Chair: Sara Shashaani (North Carolina State University) Technical Session · Model Uncertainty and Robust Simulation [In person] Sampling and Estimation Chair: Sara Shashaani (North Carolina State University) Non-parametric Uncertainty Bias and Variance Estimation via Nested Bootstrapping and Influence Functions Technical Session · Simulation in Industry 4.0 [In person] Simulation as Digital Twin in Industry 4.0 Framework I Chair: Dehghani Mohammad (Northeastern University) Technical Session · Simulation for Smart Cities [Virtual] Simulation for Smart Cities 2 Chair: Edward Y. Hua (MITRE Corporation) Technical Session [Virtual] Patient Centered Modeling and Clinical Trials Chair: Barry L. Nelson (Northwestern University) Panel · Healthcare Applications [Virtual] Panel on Simulation Modeling for Covid-19 Chair: Christine Currie (University of Southampton) Technical Session · Modeling Methodology [Virtual] DEVS Chair: Hessam Sarjoughian (Arizona State University) Specifying and Executing the Combination of Timed Finite State Automata and Causal-Block Diagrams by mapping onto DEVS Technical Session · Agent-based Simulation [Virtual] ABS Modeling Methodologies Chair: Bhakti Stephan Onggo (University of Southampton) CSonNet: An Agent-Based Modeling Software System for Discrete Time Simulation Best Contributed Applied Paper - Finalist Technical Session · Agent-based Simulation [Virtual] Simulations of Human Movements Chair: Chris Kuhlman (University of Virginia) Measuring Proximity of Individuals during Aircraft Boarding Process with Elderly Passengers through Agent-based Simulation Technical Session · Healthcare Applications [Virtual] Operations Management and Patient Flow Chair: Vijay Gehlot (Villanova University) Toolkit for Healthcare Professionals: A Colored Petri Nets Based Approach for Modeling
and Simulation of Healthcare Workflows Technical Session · Aviation Modeling and Analysis [Virtual] Airport Operations Chair: John Shortle (George Mason University) On Static vs Dynamic (Switching Of) Operational Policies in Aircraft Turnaround Team Allocation and Management Invited Paper, Contributed Paper, Technical Session · Aviation Modeling and Analysis [Virtual] Advanced Technologies in Air Transportation Chair: John Shortle (George Mason University) Technology Adoption in Air Traffic Management: A Combination of Agent-Based Modeling with Behavioral Economics Vendor · Vendor [Virtual] AutoMod Virtual User Group Technical Session · Healthcare Applications [Virtual] Simulating Disease Progression Chair: Varun Madhavan (IIT Kharagpur) Comparing Data Collection Strategies Via Input Uncertainty When Simulating Testing Policies Using Viral Load Profiles Technical Session · Scientific Applications [Virtual] Scientific Applications Chair: Rafael Mayo-García (CIEMAT); Esteban Mocskos (University of Buenos Aires (AR), CSC-CONICET) A Novel Cloud-based Framework for Standardized Simulations in the Latin American Giant Observatory (LAGO) Commercial Case Study · Commercial Case Studies [Virtual] Manufacturing Optimization Chair: David T. Sturrock (Simio LLC) Commercial Case Study · Commercial Case Studies [Virtual] Haulage Operations Chair: Devdatta Deo (Simio LLC) Technical Session · Simulation for Smart Cities [Virtual] Simulation for Smart Cities Chair: Mina Sartipi (University of Tennessee at Chattanooga); Edward Y.
Hua (MITRE Corporation); Sanja Lazarova-Molnar (University of Southern Denmark, SDU) Technical Session, Introductory Tutorial · Introductory Tutorials [Virtual] Agent-Based Modeling and Simulation for Management Decisions: A Review and Tutorial Chair: Giovanni Lugaresi (Politecnico di Milano) Technical Session · Data Science for Simulation [Virtual] DSS 1 Chair: Hamdi Kavak (George Mason University); Abdolreza Abhari (Ryerson University) Detecting Communities and Attributing Purpose to Human Mobility Data Best Contributed Applied Paper - Finalist Input Data Modeling: An Approach Using Generative Adversarial Networks Best Contributed Theoretical Paper - Finalist Panel · Simulation and Philosophy [Virtual] Panel on Ethical Considerations for Validation Chair: Andreas Tolk (The MITRE Corporation) Technical Session · Environment and Sustainability Applications [Virtual] Environmental and Sustainability Applications 2 Chair: Adrian Ramirez Nafarrate (ITAM) Dynamic Modeling and Sensitivity Analysis of a Stratified Heat Storage Coupled with a Heat Pump and an Organic Rankine Cycle Technical Session · Logistics, Supply Chains, and Transportation [Virtual] Retail Logistics Chair: Lieke de Groot (Belsimpel) Environmental Sustainability as Food for Thought!
Simulation-based Assessment of Fulfillment Strategies in the e-Grocery Sector Technical Session · Logistics, Supply Chains, and Transportation [Virtual] Local Transport Chair: Bhakti Stephan Onggo (University of Southampton) A Simulation-Based Approach to Compare Policies and Stakeholders' Behaviors for the Ride-Hailing Assignment Problem Plenary · MASM: Semiconductor Manufacturing MASM Keynote: Chip Technology Innovations and Challenges for Process Tool Scheduling and Control Chair: Hyun-Jung Kim (KAIST) Technical Session [Virtual] Insights Chair: Joe Viana (BI Norwegian Business School, Department of Accounting and Operations Management) Assessing Resilience of Medicine Supply Chain Networks to Disruptions: A Proposed Hybrid Simulation Modeling Framework Technical Session · Data Science for Simulation [Virtual] DSS 4 Chair: Hamdi Kavak (George Mason University); Abdolreza Abhari (Ryerson University) Combining Simulation and Machine Learning for Response Time Prediction for the Shortest Remaining Processing Time Discipline Technical Session · Logistics, Supply Chains, and Transportation [Virtual] Complex Logistics Systems Chair: Ralf Elbert (Technische Universität Darmstadt) Technical Session · Using Simulation to Innovate [Virtual] Simulation Analytics for Smart Digital Twin Chair: Haobin Li (National University of Singapore, Centre for Next Generation Logistics) Technical Session · Logistics, Supply Chains, and Transportation [Virtual] Public Disaster Management Chair: Reha Uzsoy (North Carolina State University) Supporting Hospital Logistics during the First Months of the COVID-19 Crisis: A Simheuristic for the Stochastic Team Orienteering Problem Technical Session · Logistics, Supply Chains, and Transportation [Virtual] Last Mile Logistics Chair: Canan Gunes Corlu (Boston University) A Hybrid Modeling Approach for Automated Parcel Lockers as a Last-Mile Delivery Scheme: A Case Study in Pamplona (Spain) Parcel Delivery for Smart Cities: A
Synchronization Approach for Combined Truck-Drone-Street Robot Deliveries Technical Session · Project Management and Construction [Virtual] Scheduling and Dynamic Simulation Chair: Fang Xu (University of Florida) Analyzing Impact Of Semi-Productive Work Hours In Scheduling And Budgeting Labor-Intensive Projects: Simulation-Based Approach Technical Session · Simulation Education [Virtual] Simulation Education Chair: Jakub Bijak (University of Southampton); Kristina Eriksson (University West, Sweden) An Educational Model for Competence Development within Simulation and Technologies for Industry 4.0 Technical Session · Healthcare Applications [Virtual] Operations Management and Patient Flow II Chair: Christine Currie (University of Southampton) Contributed Paper, Technical Session · Modeling Methodology [Virtual] Modeling Methodologies 1 Chair: Ezequiel Pecker Marcosig (UBA, CONICET) Technical Session · Simulation in Industry 4.0 [Virtual] Simulation as Digital Twin in Industry 4.0 Framework II Chair: Lauren Czerniak (University of Michigan) Applying Simheuristics for Safety Stock and Planned Lead Time Optimization in a Rolling Horizon MRP System under Uncertainty Improving Simulation Optimization Run Time When Solving for Periodic Review Inventory Policies in a Pharmacy Technical Session · Military and National Security Applications [Virtual] Medical Evacuation, Security Screening, and Unmanned Underwater Vehicles Chair: Danielle Morey (University of Washington) Analyzing the Impact of Triage Classification Errors on Military Medical Evacuation Dispatching Policies A Simulation-Optimization Approach to Improve the Allocation of Security Screening Resources in Airport Terminal Checkpoints Technical Session · Military and National Security Applications [Virtual] Naval Forces, Population Dynamics, and Amphibious Platform Cost Analysis Chair: Michèle Fee (Defence Research and Development Canada) Technical Session · Analysis Methodology [Virtual]
Metamodeling and Simulation Optimization Chair: Haoting Zhang (University of California, Berkeley; IEOR Department) Technical Session · MASM: Semiconductor Manufacturing [Virtual] Factory Operations 2 Chair: Reha Uzsoy (North Carolina State University) Technical Session · Logistics, Supply Chains, and Transportation [Virtual] Parcel Supply Chains Chair: Javier Faulin (Public University of Navarre, Institute of Smart Cities) Last-Mile Delivery of Pharmaceutical Items to Heterogeneous Healthcare Centers with Random Travel Times and Unpunctuality Fees Combining Simulation with Reliability Analysis in Supply Chain Project Management under Uncertainty: A Case Study in Healthcare Technical Session · Analysis Methodology [Virtual] Sampling Methodology and Reliability Chair: Dashi I. Singham (Naval Postgraduate School) Technical Session · Data Science for Simulation [Virtual] DSS 2 Chair: Hamdi Kavak (George Mason University); Abdolreza Abhari (Ryerson University) ExecutionManager: A Software System to Control Execution of Third-Party Software that Performs Network Computations AutoML Approach to Classification of Candidate Solutions for Simulation Models of Logistic Systems Technical Session · Healthcare Applications [Virtual] Applications of Simulation in Healthcare II Chair: Iman Attari (Indiana University Bloomington) Technical Session · Modeling Methodology [Virtual] Tools and Environments Chair: Rodrigo Castro (Universidad de Buenos Aires, ICC-CONICET) Technical Session · Modeling Methodology [Virtual] Applications in engineering and social systems Chair: Abdurrahman Alshareef (Arizona State University) Simulation Case Studies On An Advanced Sensitivity Analysis For New Extended Bus Types In The Modern Power Systems Data-driven Exploration of Lentic Water Bodies with ASVs Guided by Gradient-free Optimization/Contour Detection Algorithms Technical Session · Manufacturing Applications [Virtual] MA 1 Chair: Alexandru Rinciog (TU Dortmund University)
Simulation of stochastic rolling horizon forecast behavior with applied outlier correction to increase forecast accuracy A Biased-Randomized Discrete-Event Heuristic for the Permutation Flow Shop Problem with Multiple Paths Vendor [Virtual] Platinum Sponsor Session Chair: Amy Greer (MOSIMTEC, LLC); Claudia Szabo (University of Adelaide) Technical Session · Covid-19 and Epidemiological Simulations [Virtual] Agent based models for Tracking the Spread of Covid-19 Chair: Esteban Lanzarotti (DC-ICC, UBA-CONICET) A Multi-aspect Agent-based Model of Covid-19: Disease Dynamics, Contact Tracing Interventions and Shared Space-driven Contagions Technical Session · MASM: Semiconductor Manufacturing [Virtual] MASM 2 Chair: Abdelgafar Hamed (Infineon Technologies AG) On Scheduling A Photolithography Toolset Based On A Deep Reinforcement Learning Approach With Action Filter Technical Session, Introductory Tutorial · Introductory Tutorials [Virtual] A tutorial on Participative Discrete Event Simulation in the virtual workshop environment Chair: Hossein Piri (University of British Columbia, Sauder School of Business) Commercial Case Study · Commercial Case Studies [Virtual] Productivity and Manufacturing Chair: Devdatta Deo (Simio LLC) Vendor · Vendor [Virtual] Platinum Sponsor Session Chair: Amy Greer (MOSIMTEC, LLC); Claudia Szabo (University of Adelaide) Technical Session · Model Uncertainty and Robust Simulation [Virtual] Simulation Optimization, Prediction, and Estimation Chair: Ilya Ryzhov (University of Maryland) Technical Session · Simulation Optimization [Virtual] Applications Chair: Jeff Hong (City University of Hong Kong) Technical Session · Covid-19 and Epidemiological Simulations [Virtual] Modeling the Spread of COVID-19 Chair: Glenn Davidson (Carleton University) Invited Paper, Contributed Paper, Technical Session · Analysis Methodology [Virtual] Estimation Methodology 2 Chair: Yijie Peng (George
Mason University) Technical Session · Model Uncertainty and Robust Simulation [Virtual] Data-driven Simulation Optimization Chair: Hoda Bidkhori (University of Pittsburgh) Distributionally Robust Cycle and Chain Packing with Application to Organ Exchange Best Contributed Theoretical Paper - Finalist Vendor [Virtual] Platinum Sponsor Session Chair: Amy Greer (MOSIMTEC, LLC); Claudia Szabo (University of Adelaide) Technical Session · Modeling Methodology [Virtual] Queueing Models Chair: Veronica Gil-Costa (UNSL, CCT CONICET San Luis) Technical Session · Complex, Intelligent, Adaptive and Autonomous Systems [Virtual] V&V for M&S of Complex Systems Chair: Hessam Sarjoughian (Arizona State University) Simulation and Model Validation for Mental Health Factors Using Multi-Methodology Hybrid Approach Advanced Tutorial · Advanced Tutorials [Virtual] Toward Unbiased Deterministic Total Orderings of Parallel Simulations with Simultaneous Events Chair: Soumyadip Ghosh (IBM T. J. Watson Research Center) Commercial Case Study · Commercial Case Studies [Virtual] Supply Chains and Virtual Plants Chair: David T.
Sturrock (Simio LLC) Technical Session · Simulation Optimization [Virtual] Ranking & Selection Chair: Sait Cakmak (Georgia Institute of Technology) Plenary · PhD Colloquium PhD Colloquium Keynote Chair: Chang-Han Rhee (Northwestern University) Doctoral Colloquium · PhD Colloquium PhD Colloquium I Chair: Chang-Han Rhee (Northwestern University) Hyperparameter Optimization of Deep Neural Network with Applications to Medical Device Manufacturing pdfEstimating the Effectiveness of Non-pharmaceutical Interventions in Heterogeneous Populations During an Emerging Infectious Disease Epidemic pdfApplying Discrete-event Simulation and Value Stream Mapping to Reduce Waste in an Automotive Engine Manufacturing Plant pdfReal-time Generation and Exploitation of Discrete Event Simulation Models as Decision-support Tools for Manufacturing pdfDoctoral Colloquium · PhD Colloquium PhD Colloquium II Chair: Chang-Han Rhee (Northwestern University) Investigating Cloud-based Architecture for Distributed Simulation (ds) in Operational Reasearch (or) pdfUsing Longitudinal Health Records to Simulate the Impact of National Treatment Guidelines for Cardiovascular Disease pdfMethodological Improvements Of Online Pandemic Simulation For Short-term Healthcare Resource Prediction pdfDoctoral Colloquium · PhD Colloquium PhD Colloquium Poster Session Chair: Chang-Han Rhee (Northwestern University) Hyperparameter Optimization of Deep Neural Network with Applications to Medical Device Manufacturing pdfReal-time Generation and Exploitation of Discrete Event Simulation Models as Decision-support Tools for Manufacturing pdfInvestigating Cloud-based Architecture for Distributed Simulation (ds) in Operational Reasearch (or) pdfApplying Discrete-event Simulation and Value Stream Mapping to Reduce Waste in an Automotive Engine Manufacturing Plant pdfUsing Longitudinal Health Records to Simulate the Impact of National Treatment Guidelines for Cardiovascular Disease pdfEstimating the Effectiveness of 
Non-pharmaceutical Interventions in Heterogeneous Populations During an Emerging Infectious Disease Epidemic pdfPanel · Plenary [Virtual] What follows after tenure? Chair: Cristina Ruiz-Martín (Carleton University) Technical Session · Complex, Intelligent, Adaptive and Autonomous Systems [Virtual] Complexity Management Chair: Saurabh Mittal (MITRE Corporation) Invited Paper, Contributed Paper, Technical Session · MASM: Semiconductor Manufacturing [Virtual] Factory Operations 1 Chair: Abdelgafar Hamed (Infineon Technologies AG) Technical Session · Simulation Optimization [Virtual] Gradient-Based Optimization Chair: Soumyadip Ghosh (IBM T. J. Watson Research Center) Technical Session · Covid-19 and Epidemiological Simulations [Virtual] Effectiveness of Interventions against the Spread of Covid-19 Chair: Erik Rosenstrom (NC State) Invited Paper, Contributed Paper, Technical Session · Project Management and Construction [Virtual] Simulation for Emergency Planning and Response Chair: Wenying Ji (George Mason University) Public Demand Estimation following Disasters through Integrating Social Media and Community Demographics pdfCreating an Inter-hospital Resilient Network for Pandemic Response Based on Blockchain and Dynamic Digital Twins pdfTechnical Session · Poster Session Poster Session Chair: María Julia Blas (INGAR CONICET UTN) Streamlining The United States Immigration Court System: Using Simulation and Data Science to Effectively Deploy Capacity pdfTechnical Session · Healthcare Applications [Virtual] Applications of Simulation in Healthcare I Chair: F. LeRon Shults (University of Agder) Panel [Virtual] Panel Session: Coffee with ... 
Chair: Cristina Ruiz-Martín (Carleton University) Invited Paper · Hybrid Simulation [Virtual] Hybrid Simulation Modeling Chair: Nurul Izrin Md Saleh (Universiti Malaysia Kelantan) Technical Session · Project Management and Construction [Virtual] Simulation for Built Environment and Construction Logistics Chair: Khandakar Rashid (Arizona State University) Agent-based Simulation to Predict Occupants' Physical Distancing Behaviors in Educational Buildings pdfTechnical Session · Project Management and Construction [Virtual] Digital Twins in the Built Environment Chair: Fernanda Leite (The University of Texas at Austin) An Automated Framework For Generating Synthetic Point Clouds from As-built BIM with Semantic Annotation for Scan-to-BIM pdfCommercial Case Study · Commercial Case Studies [Virtual] Digital Twin Chair: David T. Sturrock (Simio LLC) Digital twin-based applications for assembly production lines of global automotive part suppliers pdf |