Plenary: Monday Keynote Chair: Renee M. Thiesing (Simio LLC)
The Rapid Democratization and Integration of Data with Simulation, Optimization and Artificial Intelligence Ben Amaba (IBM Data Sciences and Artificial Intelligence Team Elite) Abstract: Many CEOs, CTOs, senior executives, and other decision-makers see an advantage in the rise of big data and faster computing power. Data is now a board responsibility. The question arises: how can all this data drive innovation? Done correctly, harnessing the power of data through simulation, optimization, artificial intelligence (AI), and machine learning (ML) can improve financial, sales, manufacturing, and supply-chain operations; enable a better, more intimate customer experience; and reduce downtime. Waiting for the perfect environment is no longer a strategy; building an agile learning culture is the priority.
With open source, hybrid clouds, high-speed networks, increased computing power, and responsive platforms, simulation, optimization, AI, and ML are being democratized and intertwined at a rapid pace. McKinsey estimates that AI could deliver additional economic output of around $13 trillion by 2030, boosting global GDP by about 1.2 percent a year. Advances in computing power and open-source technologies have become a competitive advantage in day-to-day business by fundamentally improving the way industry operates. Recent progress seeks to radically change our operations and workflows. To enable and maximize the creation of value, these data science technologies must be integrated and applied with a rigorous approach. The overhead of data preparation, model governance, bias, trust, and deployment continues to inhibit democratization. The shortage of talent able to promote and apply the interdisciplinary computer, mathematical, and domain knowledge places many projects in pilot purgatory. The session will explore the future of the Digital Transformation and the use of advanced analytics to create an integrated business model in which data becomes not just a single source of truth but a strategic asset. pdf
Plenary: Tuesday Keynote Chair: Theresa Roeder (San Francisco State University)
Health Resource Allocation: Lessons for Today from Past Outbreaks Stephen Chick (INSEAD) Abstract: Recent events have brought mathematical modelling of infectious disease transmission and control to the forefront. An appropriate choice of mathematical model depends, of course, on the decision problem to be informed, yet there can be uncertainties about technical, social, and operational parameters of a model. And each model necessarily makes assumptions, for better or for worse. In this talk, we discuss different types of models for supporting cost-effective disease management decisions, touching on stochastic models, Bayesian methods, and simulation optimization. We then illustrate some obvious and some not-so-obvious ways in which the choice of model is important, drawing upon examples from the presenter’s experience with projects to address influenza, vaccination, waterborne infections, Creutzfeldt-Jakob Disease, and clinical trial design. We highlight the importance of problem selection and collaboration. pdf
MASM: Semiconductor Manufacturing, Plenary: Wednesday Keynote Chair: Lars Moench (University of Hagen)
Industry 3.5 as Hybrid Strategy empowered by AI & Big Data Analytics and Collaborative Research with Micron Taiwan for Smart Manufacturing Chen-Fu Chien (National Tsing Hua University) Abstract: The paradigm of global manufacturing is shifting as leading nations propose the next phase of the industrial revolution, with Industry 4.0 in Germany and a renewed emphasis on advanced manufacturing such as the AMP in the USA. Driven by Moore’s Law, semiconductor manufacturing is one of the most complex industries, continuously migrating to advanced technologies in pursuit of manufacturing excellence. Micron Technology, a world-leading producer of semiconductor memory and computer data storage, has established one of its largest manufacturing bases in Taiwan through a number of acquisitions of local fabs as well as investments in new fabs. Industry 3.5 was proposed as a hybrid strategy between the best practice of the existing Industry 3.0 and the to-be Industry 4.0, addressing fundamental objectives for smart manufacturing while employing artificial intelligence and big data analytics as means objectives for manufacturing intelligence solutions. This speech will introduce Industry 3.5 and use a number of empirical studies under the existing infrastructure for validation. Furthermore, collaborative research with Micron for smart manufacturing will be used to illustrate our continuous efforts employing artificial intelligence, big data analytics, optimization, and intelligent decision making for smart manufacturing and digital transformation. The talk will conclude with discussions of the implications of Industry 3.5 as an alternative to Industry 4.0 to empower humanity in the ongoing industrial revolution. pdf
Military Applications and Homeland Security, Plenary: Military Keynote Chair: Nathaniel Bastian (Joint Artificial Intelligence Center, Department of Defense)
Combining AI with M&S to Meet Emerging Military Challenges Peter Schwartz (MITRE Corporation) Abstract: The U.S. is returning to a state of great power competition. The U.S. military must once again contend with near-peer adversaries that can bring to bear advanced weapon systems that are used in coordination with diplomatic, information, military, and economic (DIME) instruments of national power. In response to these challenges, the U.S. military is turning to new concepts of warfare such as Multi-Domain Operations (MDO) and Joint All-Domain Command and Control (JADC2). These concepts seek to orchestrate capabilities more tightly across domains (land, air, maritime, space, and cyberspace) as a means to converge effects rapidly and dynamically. This approach to warfare can provide U.S. commanders with a greater variety of options while presenting an adversary with multiple simultaneous dilemmas; however, it can also present U.S. commanders and their staffs with a far more complex battlespace and much shorter planning and decision timelines than they have faced in the past.
The U.S. Department of Defense is looking to artificial intelligence (AI) and machine learning (ML) as potential technologies to support the execution of MDO and JADC2. AI and ML are often combined with models and simulations (M&S) to provide enhanced capabilities. This talk will present different configurations that combine AI/ML with M&S and discuss their potential military applications. It will conclude with a presentation of a prototype course of action (COA) analysis tool that has been developed for the Army, including the specific way this tool combines AI with M&S and future work that will enable it to better support MDO and JADC2. pdf
Plenary: Friday Keynote Chair: Theresa Roeder (San Francisco State University)
Panel Discussion: Being a Simulation Professional in this Virtual World Carrie Beam (University of Arkansas), Ken Buxton (Accenture Strategy), Traci McIntyre (Kinaxis), and Jeffrey Smith (Auburn University) Abstract: 2020 has forced many of us into uncharted waters in terms of how we teach and do business online. The panelists will share their experiences working and teaching virtually: challenges and successes, best practices, and maybe a few horror stories. Join us for a lively exchange on the advantages and disadvantages of working from home. pdf
Track Coordinator - Advanced Tutorials: Wai Kin (Victor) Chan (Tsinghua-Berkeley Shenzhen Institute, TBSI), Hong Wan (North Carolina State University)
Advanced Tutorials
Advanced Statistical Methods: Inference, Variable Selection, and Experimental Design
Chair: Susan R. Hunter (Purdue University)
Ilya Ryzhov (University of Maryland), Qiong Zhang (Clemson University), and Ye Chen (Virginia Commonwealth University)
Abstract
We provide a tutorial overview of recent advances in three methodological streams of statistical literature: design of experiments, variable selection, and approximate inference. For some of these areas (such as design of experiments), their connections to simulation research have long been known and appreciated; in other cases (such as variable selection), however, these connections are only now beginning to be built. Our presentation focuses primarily on the statistical literature, aiming to show state-of-the-art thinking with regard to these problems, but we also point out possible opportunities to use these methods in new ways for both theory and applications within simulation.
pdf
Advanced Tutorials
Verification and Validation of Simulation Models: An Advanced Tutorial
Chair: Dave Goldsman (Georgia Institute of Technology)
Robert G. Sargent (Syracuse University)
Abstract
Verification and validation of simulation models are discussed in this paper. Different approaches to deciding model validity are described and a graphical paradigm that relates verification and validation to the model development process is presented and explained. Conceptual model validity, model verification, operational validity, and data validity are discussed, documentation is briefly covered, and a recommended procedure for model validation is presented. References for further information are provided when the various aspects of conducting verification and validation of simulation models are discussed.
pdf
Advanced Tutorials
Using Simple Dynamic Analytic Framework to Characterize and Forecast Epidemics
Chair: Ilya Ryzhov (University of Maryland)
Amna Tariq, Kimberlyn Roosa, and Gerardo Chowell (Georgia State University)
Abstract
Mathematical modeling provides a powerful analytic framework to investigate the transmission and control of infectious diseases. However, the reliability of the results stemming from modeling studies depends heavily on the validity of the assumptions underlying the models as well as the quality of the data employed to calibrate them. When substantial uncertainty about the epidemiology of newly emerging diseases (e.g., the generation interval, asymptomatic transmission) hampers the application of mechanistic models that incorporate modes of transmission and parameters characterizing the natural history of the disease, phenomenological growth models provide a starting point for making inferences about key transmission parameters, such as the reproduction number, and forecasting the trajectory of the epidemic in order to inform public health policies. We describe in detail the methodology and application of three phenomenological growth models, the generalized-growth model, the generalized logistic growth model, and the Richards model, in the context of the COVID-19 epidemic in Pakistan.
pdf
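For reference, the three phenomenological growth models named in the abstract are commonly written as the following differential equations for the cumulative case count C(t); these are the standard forms from the growth-model literature and are not necessarily the exact parameterization used in the tutorial:

```latex
\begin{align*}
\text{Generalized-growth model:} \quad & \frac{dC}{dt} = r\,C(t)^{p},\\
\text{Generalized logistic growth model:} \quad & \frac{dC}{dt} = r\,C(t)^{p}\left(1-\frac{C(t)}{K}\right),\\
\text{Richards model:} \quad & \frac{dC}{dt} = r\,C(t)\left[1-\left(\frac{C(t)}{K}\right)^{a}\right],
\end{align*}
```

where $r$ is the growth rate, $p\in[0,1]$ interpolates between sub-exponential ($p<1$) and exponential ($p=1$) growth, $K$ is the final epidemic size, and $a$ controls the asymmetry of the Richards curve.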
Advanced Tutorials
Business Process Modeling and Simulation with DPMN: Resource-Constrained Activities
Chair: Sara Shashaani (North Carolina State University)
Gerd Wagner (Brandenburg University of Technology)
Abstract
This tutorial article, which is extracted from (Wagner 2019), shows how to use UML Class Diagrams and Discrete Event Process Modeling Notation (DPMN) Process Diagrams for making simulation models of business processes with resource-constrained activities based on the DES paradigm of Object Event Modeling and Simulation. In this approach, the state structure of a business system is captured by a UML Class Diagram, which defines the types of objects, events and activities underlying a DPMN Process Diagram, which captures the causal regularities of the system in the form of a set of event rules. DPMN Process Diagrams extend the Event Graphs proposed by Schruben (1983) by adding elements from the Business Process Modeling Notation (BPMN), viz. data objects and activities, and, as its main innovation over BPMN, resource-dependent activity start arrows.
pdf
Advanced Tutorials
Robustness Revisited: Simulation Optimization Viewed Through a Different Lens
Chair: Xi Chen (Virginia Tech)
Susan M. Sanchez and Paul J. Sanchez (Naval Postgraduate School)
Abstract
We start by introducing key concepts in robust design and analysis, and demonstrate how robustness often changes our perspective when contrasted with simulation optimization approaches. After defining basic terminology, we present several numerical examples with discussions of how to apply these techniques in qualitative, quantitative, and optimization contexts. Evaluating responses using loss functions can yield solutions and results that are substantially different from those based solely on expected values. Benefits in engineering practice include that robust solutions are advantageous in moving from new product development to production, in focusing decision makers on controllable aspects of their problem, and in facilitating communication between the various stakeholders. Robust solutions are designed to yield consistently good performance even in the face of uncertainty and uncontrollable factors by incorporating those aspects of the system into the problem formulation.
pdf
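The abstract's point about loss functions can be made concrete with the classical quadratic (Taguchi-style) loss around a target $\tau$; this is a standard identity, not a result specific to this tutorial:

```latex
E\left[(Y-\tau)^2\right] \;=\; \bigl(E[Y]-\tau\bigr)^2 \;+\; \mathrm{Var}(Y)
```

Two candidate solutions with identical expected responses $E[Y]$ can thus differ sharply in loss when their variances differ, which is why a robust solution may trade a small shift in the mean for a large reduction in variance.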
Advanced Tutorials
Blockchain: A Review from The Perspective of Operations Researchers
Chair: Wei Xie (Northeastern University)
Hong Wan, Yining Huang, and Kejun Li (North Carolina State University)
Abstract
Blockchain is a distributed, append-only digital ledger (database). The technology has caught much attention since the emergence of cryptocurrency, and there is an increasing number of blockchain applications in a wide variety of businesses. The concept, however, is still novel to many members of the simulation and operations research community. In this tutorial, we introduce the blockchain technology and review its frontier operations- and data-related research. There are exciting opportunities for researchers in simulation, system analysis, and data science.
pdf
Track Coordinator - Agent-Based Simulation: Chris Kuhlman (University of Virginia), Bhakti Stephan Onggo (University of Southampton)
Agent-based Simulation
Simulations of Infrastructure Systems I
Chair: Dhanan Utomo (Heriot-Watt University)
Simulation and Optimization of Traction Unit Circulations
Matthias Rößler and Matthias Wastian (dwh GmbH), Anna Jellen and Sarah Frisch (Alpen-Adria-Universität Klagenfurt), Dominic Weinberger and Philipp Hungerländer (hex GmbH), and Martin Bicher and Niki Popper (Technische Universität Wien)
Abstract
The planning of traction unit circulations in a railway network is a very time-consuming task. To support the planning personnel, this paper proposes a combination of optimization, simulation, and machine learning. This ensemble creates mathematically near-optimal circulations that are also feasible in real operating procedures. An agent-based simulation model is developed that tests the circulations for robustness against delays. The delays introduced into the system are based on predictions from a machine learning model built on historical operational data. The paper first presents the data used and the delay prediction. Afterwards, the modeling and simulation part and the optimization are presented. Finally, the interaction of simulation and optimization is described and promising results of a test case are shown.
pdf
Assessing Strain on Hospital Capacity During a Localized Epidemic Using a Calibrated Hospitalization Microsimulation
Kasey Jones and Emily Hadley (RTI International), Eric Lofgren (Washington State University), and Sarah Rhea (RTI International)
Abstract
The ability of healthcare systems to provide patient care can become disrupted and overwhelmed during a major epidemic or pandemic. We adapted an existing hospitalization microsimulation of North Carolina to assess the impact of a localized epidemic of a fictitious pathogen on inpatient hospital bed availability in the same locale. As area hospital beds reach capacity, agents are turned away and seek treatment at different hospital locations. We explore how variability in the duration and severity of an epidemic affects hospital capacity in different North Carolina counties. We analyze various epidemic scenarios and provide insights into how many days counties and hospitals would have to prepare for a surge in capacity.
pdf
Assessing the Impact of Heterogeneous Traffic on Highways via Agent-Based Simulations
Dhruv Nair, Sudarshan Yerragunta, Balasubramanian Kandaswamy, and Hrishikesh Venkataraman (Indian Institute of Information Technology, Sri City, Chittoor)
Abstract
Rules that govern highways differ widely across countries. For instance, in the USA, most highway traffic comprises cars and trucks, whereas in India, many more types of vehicles use the highway. Each vehicle type has its own characteristics and capabilities, causing variation in driver preferences, especially the preferred speed. We investigate whether this heterogeneity leads to an increase in the number of lane changes, which could potentially lead to an increase in accidents. We use agent-based modeling to compare the interaction between vehicles in two simulations, one representative of traffic in the USA and the second representative of traffic in India. The results show that increased heterogeneity in vehicle types causes a significant increase in the number of lane changes. These results have broader implications for traffic policy-making and bring into focus the need for minimum speed limits and dedicated lanes for slower vehicles.
pdf
Agent-based Simulation
Simulations of Infrastructure Systems II
Chair: Bhakti Stephan Onggo (University of Southampton)
Evaluation of Guidance Systems at Dynamic Public Transport Hubs using Crowd Simulation
Michael Wagner (TUMCREATE Limited); Philipp Andelfinger (Nanyang Technological University, TUMCREATE LIMITED); Henriette Cornet (TUMCREATE Limited); Wentong Cai (Nanyang Technological University); Alois Christian Knoll (Technische Universität München, TUMCREATE Limited); and David Eckhoff (TUMCREATE Limited)
Abstract
A key challenge in the implementation of novel public transport systems is to maintain usability over a broad spectrum of potential users. Transport systems that increasingly emphasise dynamic adjustment to changing passenger numbers and destinations over time cannot rely on static schedules and routes like traditional systems do. In this work, we investigate the use of agent-based crowd simulation to evaluate how different passenger guidance systems affect agent navigation in a public transport hub. We study the effects of different digital signage placement strategies in terms of crowding and walking times, and also analyse how the introduction of mobile phone guidance systems affects these metrics. Our results show that crowd simulation is a cost- and time-efficient tool for the evaluation of guidance systems in public transport spaces that can also support the design of bus schedules and bay assignments.
pdf
PHASE: Facilitating Agent-Based Modelling in Population Health
Eric Silverman and Umberto Gostoli (University of Glasgow)
Abstract
Agent-based modelling (ABM), despite numerous successes in various disciplines of the physical and natural sciences, remains at the fringes of population health research. ABM can contribute to public health policy-making by providing a means to develop and test ambitious policies on virtual populations prior to roll-out, and to incorporate detailed individual-level modelling of relevant behavioral processes. Here we introduce PHASE: Population Health Agent-based Simulation nEtwork, a research network started in October 2019 and funded by the UK Prevention Research Partnership that will develop and support the community of agent-based modellers in population health. We then present a worked example of ABM being applied to social care provision in the United Kingdom, demonstrating how our model facilitates the development of complex policy interventions in this area. We propose that ABM for population health research can thrive when underpinned by a strong collaborative network and supported by open-source tools and exemplar models.
pdf
Long Haul Logistics Using Electric Trailers by Incorporating an Energy Consumption Meta-Model Into Agent-Based Model
Dhanan Sarwo Utomo, Adam Gripton, and Philip Greening (Heriot-Watt University)
Abstract
This paper presents preliminary results of an agent-based modeling study (ABMS) that analyzes an electrification strategy for the UK’s long-haul logistics operations. Because long-haul logistics is very energy intensive, the dynamics of the trailer’s energy consumption must be taken into account. Engineering approaches are computationally expensive and prevent us from modeling interactions within the entire fleet of trailers. This paper proposes an alternative approach to modeling the vehicle’s energy consumption. Our model validation shows that the ABMS can replicate a real-world operator’s operations with sufficient accuracy. Subsequently, we use our ABMS to evaluate the potential benefits of using electric trailers in the operator’s fleet.
pdf
Agent-based Simulation
Simulations of Human Movement
Chair: Martijn Mes (University of Twente)
Agent-based Digital Twins (ABM-DT) in Synchromodal Transport and Logistics: The Fusion of Virtual and Physical Spaces
Tomas Ambra and Cathy Macharis (Vrije Universiteit Brussel - MOBI research center)
Abstract
Synchromodality, or synchromodal transport, aims to support the real-time optimal integration of different transport modes and infrastructure in order to induce a modal shift from road to inland waterways and rail. Such integration will help make modal choices and the synchronization of orders and available capacities more dynamic, flexible, and acceptable in terms of costs and lead times. In this regard, new technologies and their real-time inputs have to interact with freight models to support decision makers on a continuous basis. This is why we propose a symbiosis between virtual and physical environments that can bring academic models closer to the end users. The paper demonstrates a first proof of concept for long-distance Digital Twin solutions by connecting real-time data feeds from the physical system to a virtual GIS environment that can be utilized in real-time synchromodal deliveries.
pdf
Reshaping Airpower: Development of an IMPRINT Model to Analyze the Effects of Manned-Unmanned Teaming on Operator Mental Workload
Jinan M. Andrews, Christina F. Rusnock, and Michael E. Miller (Air Force Institute of Technology) and Douglas P. Meador (Air Force Research Laboratory)
Abstract
Due to the advent of autonomous technology coupled with the expense of manned aircraft, the Department of Defense is developing affordable, expendable Unmanned Aerial Vehicles (UAVs) to be operated in conjunction with jet fighters. With a single pilot commanding the UAVs while piloting their aircraft, operators may find it challenging to manage all systems should the system design not be conducive to a steady state level of workload. To understand the potential effects of manned-unmanned teaming on the pilot’s cognitive workload, an Improved Performance Research Integration Tool workload model was developed. The model predicts pilot workload in a simulated environment when interacting with the cockpit and multiple UAVs to provide insight into the effect of Human-Agent Interactions on workload and mission performance. This research concluded that peaks in workload occur for the pilot during periods of high communications load and this communication may be degraded or delayed during air-to-air engagements.
pdf
Multi-thread State Update Schemes for Microscopic Traffic Simulation
Best Contributed Applied Paper - Finalist
Wen Jun Tan and Philipp Andelfinger (Nanyang Technological University, TUMCREATE); Yadong Xu (TUMCREATE); Wentong Cai (Nanyang Technological University); Alois Knoll (Technische Universität München, Nanyang Technological University); and David Eckhoff (TUMCREATE)
Abstract
Microscopic traffic simulation is an essential tool for the evaluation of intelligent transportation systems (ITS). With the increasing complexity of ITS applications, higher-detail simulation models, and the need to analyze large-scale scenarios, simulation run-times can grow exceedingly large. One way to counter this problem is the use of parallel computing techniques, such as shared-memory multi-thread parallelism. While the foundations of parallel traffic simulation are well-known, the effects of different synchronization and agent-update mechanisms on simulation performance have not been explored systematically. In this paper, we first analyze the common properties of models used in microscopic traffic simulation to understand the impact of their data dependencies. We discuss synchronous and asynchronous agent update schemes and compare them in terms of performance and requirements. We conclude that although it requires more memory and additional conflict handling, the synchronous agent-state updating approach is favourable in terms of scalability.
pdf
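The trade-off between synchronous and asynchronous agent updates discussed in the abstract can be sketched in a few lines. This is a minimal illustration of the two schemes, not the paper's implementation; the function names and the toy "copy your left neighbour's value" rule used below are invented for the example.

```python
import copy

def synchronous_step(agents, update):
    """Synchronous update: all agents read a snapshot of the previous
    state, so the result is independent of agent ordering. This makes
    the scheme safe to parallelize across threads, at the cost of a
    second state buffer (extra memory) and conflict handling."""
    old = copy.deepcopy(agents)          # snapshot of the previous state
    for i, agent in enumerate(agents):
        update(i, agent, old)            # reads only the snapshot
    return agents

def asynchronous_step(agents, update):
    """Asynchronous update: agents update in place and immediately see
    earlier agents' new values. Cheaper in memory, but the outcome
    depends on iteration order, so parallel execution needs locking
    or partitioning to stay correct."""
    for i, agent in enumerate(agents):
        update(i, agent, agents)         # sees earlier agents' writes
    return agents
```

With a rule such as "take the speed of the vehicle ahead", the two schemes already diverge after one step on three agents, which is exactly the order-dependence the paper's comparison is about.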
Agent-based Simulation
Simulations of Crowds and Groups
Chair: Chris Kuhlman (University of Virginia)
Generating Hypotheses on Prehistoric Cultural Transformation with Agent-based Evolutionary Simulation
Fumihiro Sakahira (KOZO KEIKAKU Engineering inc.); Yuji Yamaguchi (Okayama University); Ryoya Osawa, Toshifumi Kishimoto, and Taka’aki Okubo (Doshisha University); Takao Terano (Chiba University of Commerce); and Hiro’omi Tsumura (Doshisha University)
Abstract
We propose an agent-based evolutionary simulation analogous to a genetic algorithm for generating hypotheses on prehistoric cultural transformation. As an application case study, we examine the mechanism of change in the composition of structural remains at the Jomon to Yayoi period sites in Western Japan. The simulations generate hypotheses that the major changes from the middle to the late Jomon periods and from the final Jomon period to the early Yayoi period may have been caused by different mechanisms. The latter could be interpreted as a continuous mechanism, such as inter-settlement exchanges, while the former could be interpreted as a non-continuous mechanism.
pdf
Crowd Evacuation During Slashing Terrorist Attack: A Multi-Agent Simulation Approach
Fa Zhang (Zhuhai School, Beijing Institute of Technology) and Shihui Wu and Zhihua Song (Air Force Engineering University)
Abstract
Attacks by terrorists using sharp objects such as knives and axes occur frequently. The evolution of a slashing event involves many factors, and a systematic exploration is needed to reveal the mechanism, find key factors, and choose effective responses. We built agent-based models of terrorists and civilians and explored the process of evacuation during a slashing attack. We formalized the attack process of the terrorist using a finite state machine and analyzed the characteristics of sharp weapons. We proposed a civilian behavior model based on the perception-decision-behavior framework: in an emergency, each civilian takes action based on the current situation and his/her individual characteristics. Based on the multi-agent model, we developed simulation software. Evacuation in a slashing event is simulated to study the relationship between civilians’ characteristics and the event consequences. The results show that the civilians’ observation range and risk sensitivity have a significant impact on the number of casualties.
pdf
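A finite state machine of the kind the abstract describes can be expressed as a transition table. The states and events below are hypothetical, chosen only to illustrate the pattern; the paper's actual machine is not reproduced here.

```python
from enum import Enum, auto

class State(Enum):
    SEEK = auto()     # searching for a target within observation range
    ATTACK = auto()   # within striking distance of a civilian
    PURSUE = auto()   # target has fled; chase or re-select
    STOPPED = auto()  # subdued by responders; absorbing state

# transition table: (current state, event) -> next state
TRANSITIONS = {
    (State.SEEK, "target_in_range"): State.ATTACK,
    (State.ATTACK, "target_escaped"): State.PURSUE,
    (State.PURSUE, "target_in_range"): State.ATTACK,
    (State.PURSUE, "target_lost"): State.SEEK,
}

def step(state, event):
    """Advance the attacker FSM by one event.

    STOPPED is absorbing, and the 'subdued' event stops the attacker
    from any state; unknown (state, event) pairs leave the state as-is.
    """
    if state is State.STOPPED:
        return State.STOPPED
    if event == "subdued":
        return State.STOPPED
    return TRANSITIONS.get((state, event), state)
```

Keeping the transitions in a table rather than in branching code makes the machine easy to inspect and to extend with new attacker behaviors.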
An Agent-Based Model of Common Knowledge and Collective Action Dynamics on Social Networks
Chris J. Kuhlman, Gizem Korkmaz, and S. S. Ravi (University of Virginia) and Fernando Vega-Redondo (Bocconi University)
Abstract
Protest is a collective action problem and can be modeled as a coordination game in which people take an action with the potential to achieve shared mutual benefits. In game-theoretic contexts, successful coordination requires that people know each others’ willingness to participate, and that this information is common knowledge among a sufficient number of people. We develop an agent-based model of collective action that was the first to combine social structure and individual incentives. Another novel aspect of the model is that the social network increases in density (i.e., new graph edges are formed) over time. The model studies the formation of common knowledge through local interactions and the social network structures that characterize it. We use four real-world, data-mined social networks (Facebook, Wikipedia, email, and peer-to-peer networks) and one scale-free network, and conduct computational experiments to study contagion dynamics under different conditions.
pdf
Agent-based Simulation
Simulation-Supporting Methodologies
Chair: Wentong Cai (Nanyang Technological University)
Using Agent-based Simulation for Emergent Behavior Detection in Cyber-physical Systems
Rob Bemthuis, Martijn Mes, Maria-Eugenia Iacob, and Paul Havinga (University of Twente)
Abstract
Traditional modeling approaches, based on predefined business logic, offer little support for today's complex environments. In this paper, we propose a conceptual agent-based simulation framework to help not only discover complex business processes but also to analyze and learn from emergent behavior arising in cyber-physical systems. Techniques originating from agent-based modeling as well as from the process mining discipline are used to reinforce agent-based decision-making. Whereas agent-technology is used to orchestrate the integration and relationship between the environment and business logic activities, process mining capabilities are mainly used to discover and analyze emergent behavior. Using a functional decomposition approach, we specified three agent types: cyber-physical controller agent, business rule management agent, and emergent behavior detection agent. We use agent-based simulation of a logistics cold chain case study to demonstrate the feasibility of our approach.
pdf
Utilizing Spatio-Temporal Data In Multi-Agent Simulation
Daniel Glake and Norbert Ritter (University of Hamburg) and Thomas Clemen (Hamburg University of Applied Sciences)
Abstract
Spatio-temporal properties strongly influence a large proportion of multi-agent simulations (MAS) in their application domains. Time-dependent simulations benefit from correct and time-sensitive input data that match the current simulated time or offer the possibility to take into account previous simulation states in their modelling perspective. In this paper, we present the concepts and semantics of data-driven simulations with vector and raster data and extend them by a time dimension that applies at run-time within the simulation execution or in conjunction with the definition of MAS models. We show that the semantics consider the evolution of spatio-temporal objects with their temporal relationships between spatial entities.
pdf
Simulating Re-configurable Multi-Rovers For Planetary Exploration Using Behavior-based Ontology
Justin Jose, Divye Singh, Amit Patel, and Harshal Ganpatrao Hayatnagarkar (ThoughtWorks Technologies India)
Abstract
For planetary exploration, space agencies have usually sent single robotic rovers to complete missions. An alternative approach is to send multiple rovers, which can insure against the failure of one or more rovers. Planning a multi-rover mission has its own challenges, and simulations can aid in identifying and addressing them. In this paper, we present an ontology-based approach to simulating a multi-rover planetary exploration mission, with a focus on resilience, adaptation, heterogeneity, and reconfigurability. We present an ontology that describes multiple rovers along with an inventory of their parts shipped with a lander. Our approach shows that ontology-based simulations help in complex scenarios such as loaning parts from the inventory and salvaging a damaged rover for usable parts.
pdf
Track Coordinator - Analysis Methodology: Demet Batur (University of Nebraska-Lincoln), Wei Xie (Northeastern University)
Analysis Methodology
Estimation and Fitting
Chair: Guanting Chen (Stanford University)
The Ease of Fitting but Futility of Testing a Nonstationary Poisson Process from One Sample Path
Barry L. Nelson (Northwestern University) and Lawrence M. Leemis (William & Mary)
Abstract Abstract
The nonstationary Poisson process (NSPP) is a workhorse tool for modeling and simulating arrival processes with time-dependent rates. In many applications only a single sequence of arrival times is observed. While one sample path is sufficient for estimating the arrival rate or integrated rate function of the process, as we illustrate in this paper, we show that testing for Poissonness, in the general case, is futile. In other words, when only a single sequence of arrival data is observed, one can fit an NSPP to it, but the choice of "NSPP" can only be justified by an understanding of the underlying process physics, or a leap of faith, not by testing the data. This result suggests the need for sensitivity analysis when such a model is used to generate arrivals in a simulation.
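The fitting half of this result is easy to see in practice. The sketch below (an editorial illustration, not the authors' estimator; the two-rate toy path and the bin count are assumptions of the example) estimates a piecewise-constant rate function from a single observed arrival path by binning the arrival times.

```python
import numpy as np

def estimate_rate(arrival_times, t_end, n_bins):
    """Piecewise-constant rate estimate from one NSPP sample path."""
    edges = np.linspace(0.0, t_end, n_bins + 1)
    counts, _ = np.histogram(arrival_times, bins=edges)
    return counts / np.diff(edges)  # events per unit time in each bin

# Toy path: roughly 2 events on [0, 1) and 10 events on [1, 2)
rng = np.random.default_rng(0)
t_lo = rng.uniform(0.0, 1.0, rng.poisson(2))
t_hi = rng.uniform(1.0, 2.0, rng.poisson(10))
rates = estimate_rate(np.concatenate([t_lo, t_hi]), t_end=2.0, n_bins=2)
```

The fit always "succeeds" in this sense, which is precisely the paper's point: nothing in the fitted rates tests whether the path was Poisson at all.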
pdf
Unbiased Simulation Estimators for Path Integrals of Diffusions
Best Contributed Theoretical Paper - Finalist
Guanting Chen (Stanford University); Alex Shkolnik (University of California, Santa Barbara); and Kay Giesecke (Stanford University)
Abstract Abstract
We develop and analyze Monte Carlo estimators for functions of a path integral of a multivariate diffusion process with general state-dependent drift and volatility. We prove the unbiasedness of these estimators by extending regularity conditions of the parametrix method.
pdf
Analysis Methodology
Estimation Methodology
Chair: Wei Xie (Northeastern University)
Steady-State Quantile Estimation Using Standardized Time Series
Christos Alexopoulos, Joseph H. Boone, David Goldsman, and Athanasios Lolos (Georgia Institute of Technology); Kemal D. Dingec (Gebze Technical University); and James R. Wilson (North Carolina State University)
Abstract Abstract
Extending developments of Calvin and Nakayama in 2013 and Alexopoulos et al. in 2019, we formulate point and confidence-interval (CI) estimators for given quantiles of a steady-state simulation output process based on the method of standardized time series (STS). Under mild, empirically verifiable conditions, including a geometric-moment contraction (GMC) condition and a functional central limit theorem for an associated indicator process, we establish basic asymptotic properties of the STS quantile-estimation process. The GMC condition has also been proved for many widely used time-series models and a few queueing processes, such as M/M/1 waiting times. We derive STS estimators for the associated variance parameter that are computed from nonoverlapping batches of outputs, and we combine those estimators to build asymptotically valid CIs. Simulation experiments show that our STS-based CI estimators have the potential to compare favorably with their conventional counterparts computed from nonoverlapping batches.
pdf
Quantile Estimation via a Combination of Conditional Monte Carlo and Randomized Quasi-Monte Carlo
Marvin K. Nakayama (New Jersey Institute of Technology); Zachary T. Kaplan (New Jersey Institute of Technology, Google); Yajuan Li (New Jersey Institute of Technology); Bruno Tuffin (Inria-Rennes, University of Rennes); and Pierre L'Ecuyer (Universite de Montreal)
Abstract Abstract
We consider the problem of estimating the p-quantile of a distribution when observations from that distribution are generated from a simulation model. The standard estimator takes the p-quantile of the empirical distribution of independent observations obtained by Monte Carlo. To get an improvement, we use conditional Monte Carlo to obtain a smoother estimate of the distribution function, and we combine this with randomized quasi-Monte Carlo to further reduce the variance. The result is a much more accurate quantile estimator, whose mean square error can converge even faster than the canonical rate of O(1/n).
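The conditional Monte Carlo smoothing step can be sketched in a few lines (the RQMC component is omitted here, and the target Y = E1 + E2 with Exp(1) summands is an assumption of this illustration, not the paper's setting): conditioning on E1 makes the conditional CDF of Y available in closed form, so averaging it yields a smooth CDF estimate that can be inverted on a grid.

```python
import numpy as np

def cmc_quantile(p, n=4000, seed=1):
    """p-quantile of Y = E1 + E2, E_i ~ Exp(1), via conditional MC:
    P(Y <= y | E1) = 1 - exp(-(y - E1)) for y > E1, else 0.
    Averaging over draws of E1 gives a smooth CDF, inverted on a grid."""
    rng = np.random.default_rng(seed)
    e1 = rng.exponential(size=n)
    ys = np.linspace(0.0, 10.0, 401)
    cond = np.where(ys[None, :] > e1[:, None],
                    1.0 - np.exp(-(ys[None, :] - e1[:, None])), 0.0)
    cdf = cond.mean(axis=0)           # smooth CDF estimate at each grid point
    return ys[np.searchsorted(cdf, p)]

q_med = cmc_quantile(0.5)  # Y ~ Gamma(2, 1); true median is about 1.678
```

Replacing the i.i.d. draws of `e1` with a randomized low-discrepancy sequence is where the RQMC variance reduction of the paper would enter.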
pdf
A Method for Micro-Dynamics Analysis Based on Causal Structure of Agent-Based Simulation
Hiroaki Yamada (Fujitsu Laboratories Ltd.), Takashi Kato (Fujitsu Kyushu Network Technologies Ltd.), Shohei Yamane and Kotaro Ohori (Fujitsu Laboratories Ltd.), and Shingo Takahashi (Waseda University)
Abstract Abstract
Micro-dynamics analysis plays an important role in decision making in complex social systems. It has been used to analyze how macro-phenomena arise from the viewpoint of individual agent behavior. However, the causes extracted during the analysis often include two types of useless causes: simple causes, which are not useful for decision making regarding new policies, and small causes, which suggest inefficient policies. In this paper, we propose a method to extract causes that include at least one feature from the attribute, perception, and action variables of model parameters and logs. Using a simulation of an airport terminal, we extracted the causes of a specific congestion, created a policy based on the results, and showed that the proposed method can eliminate both simple and small causes.
pdf
Analysis Methodology
Output Analysis
Chair: Ben Feng (University of Waterloo)
Reusing Simulation Outputs of Repeated Experiments via Likelihood Ratio Regression
Ben Feng (University of Waterloo) and Guangxin Jiang (Harbin Institute of Technology)
Abstract Abstract
Simulation experiments are often conducted repeatedly as the parameters of the stochastic system are periodically updated, so storing data from previous experiments may help the current one. In this paper, we consider how to reuse such periodic simulation data to develop high-quality metamodels. We propose a likelihood ratio method to convert simulation data generated under old model parameters into simulation data under new model parameters, and we use generalized least squares regression to build the metamodel. An asymptotic variance analysis shows the benefits of reusing previous simulation data for prediction accuracy, and numerical results show the effectiveness of the proposed method.
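The likelihood-ratio conversion at the heart of this idea can be illustrated in a few lines. The exponential input model and the parameter values below are assumptions for illustration only, not the paper's setting: outputs simulated under the old parameter are re-weighted by f_new/f_old to estimate a mean under the new parameter without re-running the simulation.

```python
import numpy as np

def lr_reuse_mean(x_old, theta_old, theta_new):
    """Re-weight outputs simulated under an Exp(theta_old) input
    distribution to estimate E[X] under Exp(theta_new), using the
    likelihood ratio w(x) = f_new(x) / f_old(x)."""
    w = (theta_new * np.exp(-theta_new * x_old)) / (
        theta_old * np.exp(-theta_old * x_old))
    return float(np.mean(w * x_old))

rng = np.random.default_rng(42)
x_old = rng.exponential(scale=1.0, size=200_000)          # theta_old = 1
est = lr_reuse_mean(x_old, theta_old=1.0, theta_new=2.0)  # true mean = 0.5
```

In the paper this reuse is combined with generalized least squares across design points; the sketch shows only the single-point re-weighting.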
pdf
Green Simulation Assisted Reinforcement Learning with Model Risk for Biomanufacturing Learning and Control
Hua Zheng and Wei Xie (Northeastern University) and Ben Feng (University of Waterloo)
Abstract Abstract
Biopharmaceutical manufacturing faces critical challenges, including complexity, high variability, lengthy lead times, and limited historical data and domain knowledge of the underlying stochastic process. To address these challenges, we propose a green simulation assisted model-based reinforcement learning (GS-RL) method in which the bioprocess model risk is quantified by the posterior distribution given observed data, and all simulation outputs generated in the learning process are efficiently recycled and reused. The main benefit of the proposed method is high computational efficiency, as it simultaneously guides learning and dynamic decision making.
The green simulation likelihood ratio metamodel reuses simulation outputs from previous iterations in a stochastic search algorithm for the optimal policy and the outputs from previous experiments. As such, the quality of gradient estimation is improved and the search for the optimal policy converges faster. Our numerical studies show promising results.
pdf
Metric Learning for Simulation Analytics
Graham Laidler and Lucy E. Morgan (Lancaster University), Barry L. Nelson (Northwestern University), and Nicos G. Pavlidis (Lancaster University)
Abstract Abstract
The sample path generated by a stochastic simulation often exhibits significant variability within each replication, revealing periods of good and poor performance alike. As such, traditional summaries of aggregate performance measures overlook the more fine-grained insights into the operational system behavior. In this paper, we take a simulation analytics view of output analysis, turning to machine learning methods to uncover key insights from the dynamic sample path. We present a k nearest neighbors model on system state information to facilitate real-time predictions of a stochastic performance measure. This model is built on the premise of a system-specific measure of similarity between observations of the state, which we inform via metric learning. An evaluation of our approach is provided on a stochastic activity network and a wafer fabrication facility, both of which give us confidence in the ability of metric learning to provide interpretation and improved predictive performance.
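The core predictive step can be sketched generically (this is not the authors' implementation; the toy two-dimensional state and the hand-set scaling vector standing in for a learned metric are assumptions of the example): a k-nearest-neighbors model predicts a performance measure from state vectors, with per-dimension rescaling playing the role of the learned metric.

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=5, scale=None):
    """k-nearest-neighbor prediction of a performance measure from
    system-state vectors; `scale` rescales dimensions before the
    distance computation, mimicking a learned metric."""
    s = np.ones(X_train.shape[1]) if scale is None else scale
    d = np.linalg.norm((X_train - x_query) * s, axis=1)
    idx = np.argsort(d)[:k]
    return float(y_train[idx].mean())

# Toy state: (queue length, noise); waiting time depends on queue only.
rng = np.random.default_rng(0)
X = np.column_stack([rng.integers(0, 10, 500), rng.normal(0, 5, 500)])
y = 2.0 * X[:, 0] + rng.normal(0, 0.1, 500)
# Zeroing out the irrelevant dimension is what metric learning would do.
pred = knn_predict(X, y, np.array([4.0, 0.0]), scale=np.array([1.0, 0.0]))
```

With the uninformative dimension down-weighted, the neighbors are states with the same queue length, so the prediction tracks the true mean response.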
pdf
Analysis Methodology
Metamodeling
Chair: Xi Chen (Virginia Tech)
Uniform Error Bounds for Stochastic Kriging
Guangrui Xie and Xi Chen (Virginia Tech)
Abstract Abstract
In this paper, we propose an approach to construct uniform error bounds (or confidence intervals) for stochastic kriging with a prescribed confidence level. The theoretical development sheds some light on the impact of simulation experimental designs and budget allocation schemes as well as their relative importance on the large-sample properties of stochastic kriging. Through numerical evaluations, we demonstrate the superiority of the uniform error bounds to the simultaneous confidence intervals obtained by applying Bonferroni correction under various experimental settings.
pdf
Stochastic Gaussian Process Model Averaging for High-dimensional Inputs
Maxime Xuereb and Szu Hui Ng (National University of Singapore) and Giulia Pedrielli (Arizona State University)
Abstract Abstract
Many statistical learning methodologies lose efficiency and accuracy when applied to large, high-dimensional data-sets, and the loss is exacerbated by noisy data. In this paper, we focus on Gaussian Processes (GPs), a family of non-parametric approaches used in machine learning and Bayesian Optimization; GPs scale poorly with input data size and dimensionality. This paper presents, for the first time, the Stochastic GP Model Averaging (SGPMA) algorithm, which tackles both challenges. SGPMA uses a Bayesian approach to weight several predictors, each trained with an independent subset of the initial data-set (addressing large data-sets) and defined in a low-dimensional embedding of the original space (addressing high dimensionality). We conduct several experiments with different input sizes and dimensionalities. The results show that our methodology is superior to naive averaging and that the embedding choice is critical to managing the computational cost/prediction accuracy trade-off.
pdf
Efficient Risk Estimation Using Extreme Value Theory and Simulation Metamodeling
Joseph Kennedy, Armin Khayyer, Alexander Vinel, and Alice Smith (Auburn University)
Abstract Abstract
This paper considers a new approach for constructing metamodels for capturing tail behavior in stochastic systems, e.g., simulation outputs. Specifically, we are concerned with the problem of global estimation of conditional value-at-risk (CVaR) surface, given (stochastic) responses from a collection of design points. The approach combines stochastic kriging, which has previously been shown to work well for metamodeling of discrete-event simulation output, with extreme value theory, which is a powerful statistical tool for estimating tail behavior. We present the general methodology and promising results of preliminary computational experiments.
pdf
Analysis Methodology
Rare-event Simulation
Chair: Henry Lam (Columbia University)
On the Error of Naive Rare-Event Monte Carlo Estimator
Yuanlu Bai and Henry Lam (Columbia University)
Abstract Abstract
We consider the estimation of rare-event probabilities using sample proportions output by naive Monte Carlo. Unlike estimators based on variance reduction techniques, this naive estimator carries no a priori relative efficiency guarantee. On the other hand, due to the recent surge of sophisticated rare-event problems arising in safety evaluations of intelligent systems, efficiency-guaranteed variance reduction may face implementation challenges, which motivates a closer look at naive estimators. In this paper we investigate the naive rare-event estimator, particularly its conservativeness and the guarantees in using it to construct confidence bounds for the target probability. We show that the half-width of a valid confidence interval typically scales proportionally to the magnitude of the target probability and inversely with the square root of the number of positive outcomes in the Monte Carlo run. We also derive and compare several valid confidence bounds constructed from various techniques.
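The scaling in the half-width claim is easy to check numerically with the textbook normal-approximation interval (a generic sketch, not one of the paper's proposed bounds; the counts below are made up): with p-hat small, z*sqrt(p(1-p)/n) is approximately z * p-hat / sqrt(k), where k is the number of positive outcomes.

```python
import math

def naive_ci(k, n, z=1.96):
    """Normal-approximation CI for a rare-event probability from
    k positive outcomes in n naive Monte Carlo trials."""
    p_hat = k / n
    half = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat - half, p_hat + half

# 100 hits in 10 million trials: p_hat = 1e-5,
# half-width is roughly z * p_hat / sqrt(k) = 1.96e-6.
lo, hi = naive_ci(k=100, n=10_000_000)
```

Note the relative error of the interval depends on k alone, which is why the number of observed rare events, not the raw sample size, governs the usefulness of the naive estimator.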
pdf
Rare-Event Simulation for Multiple Jump Events in Heavy-Tailed Lévy Processes with Infinite Activities
Xingyu Wang and Chang-Han Rhee (Northwestern University)
Abstract Abstract
In this paper we address the problem of rare-event simulation for heavy-tailed Lévy processes with infinite activities. We propose a strongly efficient importance sampling algorithm that builds upon the sample path large deviations for heavy-tailed Lévy processes, the stick-breaking approximation of extrema of Lévy processes, and the randomized debiasing Monte Carlo scheme. The proposed importance sampling algorithm can be applied to a broad class of Lévy processes and, in our numerical experiments, exhibits significant efficiency improvements over the crude Monte Carlo method.
pdf
Comparing Regenerative-Simulation-Based Estimators of the Distribution of the Hitting Time to a Rarely Visited Set
Peter W. Glynn (Stanford University); Marvin K. Nakayama (New Jersey Institute of Technology); and Bruno Tuffin (Inria, University of Rennes)
Abstract Abstract
We consider the estimation of the distribution of the hitting time to a rarely visited set of states for a regenerative process. In a previous paper, we provided two estimators that exploited the weak convergence of the hitting time divided by its expectation to an exponential as the rare set becomes rarer. We now add three new estimators, based on a corrected exponential, a gamma, and a bootstrap approach, the last possibly providing less biased estimators when the rare set is only moderately rare. Numerical results illustrate that all of the estimators perform similarly. Although the paper focuses on estimating a distribution, the ideas can also be applied to estimate risk measures, such as a quantile or conditional tail expectation.
pdf
Analysis Methodology
Simulation and Optimization
Chair: Jose Blanchet (Stanford University)
A Class of Optimal Transport Regularized Formulations with Applications to Wasserstein GANs
Saied Mahdian, Jose H. Blanchet, and Peter W. Glynn (Stanford University)
Abstract Abstract
Optimal transport costs (e.g. Wasserstein distances) are used for fitting high-dimensional distributions. For example, popular artificial intelligence algorithms such as Wasserstein Generative Adversarial Networks (WGANs) can be interpreted as fitting a black-box simulator of structured data with certain features (e.g. images) using the Wasserstein distance. We propose a regularization of optimal transport costs and study its computational and duality properties. We obtain running time improvements for fitting WGANs with no deterioration in testing performance, relative to current benchmarks. We also derive finite sample bounds for the empirical Wasserstein distance from our regularization.
pdf
Distributionally Constrained Stochastic Gradient Estimation Using Noisy Function Evaluations
Henry Lam and Junhui Zhang (Columbia University)
Abstract Abstract
We consider gradient estimation with only noisy function evaluations, where the function can only be evaluated at values lying within a probability simplex. We are interested in obtaining gradient estimators where each (pair of) data collection or simulation run applies simultaneously to all directions at once. Our problem is motivated by the use of stochastic approximation in distributionally robust simulation analysis, which involves solving for worst-case input distributions in a black-box simulation model. In this context, conventional gradient schemes such as simultaneous perturbation face challenges, as the moment conditions required to "cancel" higher-order error terms cannot be satisfied without violating the simplex constraints. We investigate a new set of required conditions on the probability distribution that governs the perturbation, which leads us to a class of implementable gradient estimators using Dirichlet mixtures. We study the statistical properties of these estimators and demonstrate their effectiveness with numerical results.
pdf
Optimally Tuning Finite-difference Estimators
Haidong Li (Peking University) and Henry Lam (Columbia University)
Abstract Abstract
We consider stochastic gradient estimation when only noisy function evaluations are available. Central finite-difference scheme is a common method in this setting, which involves generating samples under perturbed inputs. Though it is widely known how to select the perturbation size to achieve the optimal order of the error, exactly achieving the optimal first-order error, which we call asymptotic optimality, is considered much more challenging and not attempted in practice. In this paper, we provide evidence that designing asymptotically optimal estimator is practically possible. In particular, we propose a new two-stage scheme that first estimates the required parameter in the perturbation size, followed by running finite-difference based on the estimated parameter in the first stage. Both theory and numerical experiments demonstrate the optimality of the proposed estimator and the robustness over conventional finite-difference schemes based on ad hoc tuning.
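The baseline the paper improves on can be sketched directly (a generic central finite-difference estimator, not the authors' two-stage scheme; the cubic test function, noise level, and the n**(-1/6) perturbation scaling are assumptions of the example): each gradient estimate averages n noisy central-difference pairs at x plus or minus h.

```python
import numpy as np

def central_fd(f_noisy, x, h, n, rng):
    """Central finite-difference gradient estimate from noisy
    function evaluations, averaging n pairs at x +/- h."""
    g = [(f_noisy(x + h, rng) - f_noisy(x - h, rng)) / (2 * h)
         for _ in range(n)]
    return float(np.mean(g))

def f_noisy(x, rng):              # noisy evaluation of f(x) = x**3
    return x**3 + rng.normal(0, 0.1)

rng = np.random.default_rng(7)
# The optimal-order perturbation for central FD scales like n**(-1/6);
# tuning its constant optimally is what the paper's two-stage scheme does.
n = 10_000
grad = central_fd(f_noisy, x=1.0, h=n ** (-1 / 6), n=n, rng=rng)  # true: 3
```

The remaining error here is dominated by the h**2 bias term; choosing the constant in h asymptotically optimally, rather than ad hoc, is the contribution of the paper.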
pdf
Analysis Methodology
Sampling Methodology
Chair: Kai Liu (University of Prince Edward Island)
Perfect Sampling of Multivariate Hawkes Processes
Xinyun Chen and Xiuwen Wang (School of Data Science, the Chinese University of Hong Kong, Shenzhen)
Abstract Abstract
As an extension of the self-exciting Hawkes process, the multivariate Hawkes process models counting processes of different types of random events with mutual excitement. In this paper, we present a perfect sampling algorithm that can generate i.i.d. stationary sample paths of a multivariate Hawkes process without any transient bias. In addition, we give an explicit expression for the algorithm's complexity in terms of the model and algorithm parameters, and we provide numerical schemes to find the parameter set that minimizes the complexity of the perfect sampling algorithm.
pdf
Path Generation Methods for Valuation of Large Variable Annuities Portfolio Using Quasi-Monte Carlo Simulation
Ben Feng (University of Waterloo) and Kai Liu (University of Prince Edward Island)
Abstract Abstract
Variable annuities are long-term insurance products that offer a large variety of investment-linked benefits and have gained much popularity in the last decade. Accurate valuation of large variable annuity (VA) portfolios is an essential task for insurers. However, these products often have complicated payoffs that depend on both the policyholder's mortality risk and financial market risk. Consequently, their values are usually estimated by computationally intensive Monte Carlo simulation, and simulating large numbers of sample paths from complex dynamic asset models is often a computational bottleneck. In this study, we propose and analyze three Quasi-Monte Carlo path generation methods (PGMs) for the valuation of large VA portfolios: Cholesky decomposition, Brownian bridge, and principal component analysis. Our numerical results indicate that all three PGMs produce more accurate estimates than standard Monte Carlo simulation at both the contract and portfolio levels.
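The Brownian-bridge construction, one of the three PGMs, can be sketched generically for a path length that is a power of two (this is a standard textbook construction, not code from the paper): the first normal fixes the terminal value and later normals fill midpoints conditionally, which is what concentrates the path's variance in the leading coordinates where QMC points are most uniform.

```python
import numpy as np

def brownian_bridge_path(z, T=1.0):
    """Map normals z (len must be a power of two) to a Brownian path on
    [0, T]: z[0] fixes W(T), later entries fill midpoints conditionally
    on their two already-generated neighbors."""
    m = len(z)
    w = np.zeros(m + 1)
    w[m] = np.sqrt(T) * z[0]
    k, step = 1, m
    while step > 1:
        half = step // 2
        for i in range(half, m, step):
            l, r = i - half, i + half
            tl, tm, tr = l * T / m, i * T / m, r * T / m
            mean = ((tr - tm) * w[l] + (tm - tl) * w[r]) / (tr - tl)
            var = (tm - tl) * (tr - tm) / (tr - tl)
            w[i] = mean + np.sqrt(var) * z[k]
            k += 1
        step = half
    return w

# With z = e_1 the path is the straight line from 0 to W(1) = 1:
w = brownian_bridge_path(np.array([1.0, 0.0, 0.0, 0.0]))
```

Feeding quasi-random normals into `z` instead of pseudo-random ones is the QMC step; the bridge ordering is what makes the low-discrepancy structure pay off.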
pdf
Simulating Nonstationary Spatio-Temporal Poisson Processes Using the Inversion Method
Best Contributed Theoretical Paper - Finalist
Haoting Zhang and Zeyu Zheng (University of California, Berkeley)
Abstract Abstract
We study the problem of simulating a class of nonstationary spatio-temporal Poisson processes whose intensity function is piecewise linear in both the time dimension and the spatial location dimension. We propose an exact simulation algorithm based on the inversion method. The algorithm has three advantages. First, the entire procedure involves only closed-form computation, with no need for numerical integration or numerical inversion of any function; each step requires only exact arithmetic operations. Second, the algorithm is sample efficient, especially compared to the thinning method when the maximum intensity value is much larger than the minimum intensity value. Third, the algorithm generates arrivals sequentially, one at a time in ascending order, so that they can be conveniently fed into real-time or online decision-making tools.
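For the time dimension alone, the closed-form inversion is easy to exhibit (a single linear segment and the parameter values below are assumptions of this sketch; the paper handles the full piecewise-linear spatio-temporal case): with lambda(t) = a + b*t, the cumulative intensity Lambda(t) = a*t + b*t^2/2 is inverted with the quadratic formula, so each arrival is generated exactly, in ascending order.

```python
import math
import random

def nspp_arrivals_linear(a, b, T, seed=0):
    """Exact inversion sampling of a Poisson process on (0, T] with
    linear intensity lambda(t) = a + b*t (a > 0, a + b*T > 0)."""
    rng = random.Random(seed)
    arrivals, big_lambda = [], 0.0
    while True:
        big_lambda += rng.expovariate(1.0)  # next point of unit-rate process
        if b == 0:
            t = big_lambda / a
        else:  # solve a*t + b*t^2/2 = big_lambda for t > 0
            t = (-a + math.sqrt(a * a + 2 * b * big_lambda)) / b
        if t > T:
            return arrivals
        arrivals.append(t)

arr = nspp_arrivals_linear(a=1.0, b=2.0, T=5.0, seed=3)
```

Because each arrival is produced sequentially and exactly, no candidate points are discarded, in contrast to thinning.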
pdf
Aviation Modeling and Analysis
Track Coordinator - Aviation Modeling and Analysis: Miguel Mujica Mota (Amsterdam University of Applied Sciences), John Shortle (George Mason University)
Aviation Modeling and Analysis
Airport and Droneport Operations
Chair: John Shortle (George Mason University)
Carousel Inspired Virtual Circulation: A Simulation Model For UAV Arrival And Landing Procedure Under Random Events
Gregoire Arthur Ky, Sameer Alam, and Vu Duong (Nanyang Technological University)
Abstract Abstract
The current growth in the use of Unmanned Aerial Vehicles (UAVs) has brought to attention the need to develop corresponding new infrastructure. One proposal is the Carousel Inspired Virtual Circulation method, which consists of having UAVs represented as virtual blocks circulating along a virtual closed circuit. The purpose of this paper is to model and simulate this method in a landing configuration for large UAVs and evaluate its efficacy. Compared to previous works, this simulation takes into account more restrictive parameters and considers a randomized disruption as well as an emergency landing situation. The results obtained after three runs of the simulation showed that, in each simulation, at least one virtual block ran out of battery before landing. Thus, the limits of the method have been identified, and further optimization of the landing sequence will be required in future work.
pdf
Reduction of Taxi-related Airport Emissions with Disruption-aware Stand Assignment: Case of Mexico City International Airport
Margarita Bagamanova (Autonomous University of Barcelona) and Miguel Mujica Mota (Amsterdam University of Applied Sciences)
Abstract Abstract
Airport management is often challenged by the task of managing aircraft parking positions efficiently while complying with environmental regulations and capacity restrictions. Frequently this task is further complicated by various perturbations that disrupt the punctuality of airport operations. This paper presents an innovative approach for obtaining an efficient stand assignment that considers the stochastic nature of the airport environment and the emissions-reduction targets of the modern air transportation industry. Furthermore, the presented methodology demonstrates how the same procedure for creating a stand assignment can help identify emissions mitigation potential. This paper illustrates the application of the methodology combined with simulation and demonstrates the impact of Bayesian modeling and metaheuristic optimization on the reduction of taxi-related emissions.
pdf
Complex, Intelligent, Adaptive and Autonomous Systems
Track Coordinator - Complex, Intelligent, Adaptive and Autonomous Systems: Saurabh Mittal (MITRE Corporation), Claudia Szabo (University of Adelaide)
Complex, Intelligent, Adaptive and Autonomous Systems
CIAAS - Applications
Chair: Claudia Szabo (University of Adelaide)
Towards Situation Aware Dispatching In a Dynamic and Complex Manufacturing Environment
Chew Wye Chan and Boon Ping Gan (D-SIMLAB Technologies Pte Ltd) and Wentong Cai (Nanyang Technological University)
Abstract Abstract
Dispatch rules are commonly used to schedule lots in the semiconductor industry. Earlier studies have shown that changing dispatch rules in reaction to a dynamic manufacturing situation improves overall performance. It is common to use discrete event simulation to evaluate dispatch rules under different manufacturing situations. Machine learning methods, in turn, have been shown to be useful for learning the relationship between a manufacturing situation and the dispatch rules, generating dispatching knowledge. In this work, we use simulation and machine learning methods to generate dispatching knowledge and define features that are relevant in a dynamic product-mix situation. However, more features increase the risk of overfitting the machine learning model, so dimension reduction methods are explored to reduce overfitting and improve the generalization of the model. Simulation results show that this approach can adapt the dispatch rule combination and achieve comparable factory performance measurements.
pdf
Design and Simulation of a Wide Area Search Mission: An Implementation of an Autonomous Systems Reference Architecture
David King, David Jacques, Jeremy Gray, and Katherine Cheney (Air Force Institute of Technology)
Abstract Abstract
The implementation and testing of autonomous and cooperative unmanned systems is challenging due to the inherent design complexity, infinite test spaces, and lack of autonomy specific measures. Simulation provides a low cost alternative to flight tests, allowing researchers to rapidly iterate on the design before fielding. To expedite this process, an Autonomous System Reference Architecture (ASRA) allows researchers to utilize existing software modules to rapidly develop algorithms for autonomous systems and test them in included simulation environments. In this paper, we implement ASRA on a cooperative Wide Area Search scenario as a test bed to study ASRA's utility for rapid prototyping and evaluation of autonomous and cooperative systems. Through a face centered cubic design of experiments, selected autonomy metrics are studied to provide a response surface model to characterize the system and provide a tool for optimizing mission control parameters and maximizing mission performance.
pdf
A Simulation Model For Volunteer Computing Micro-Blogging Services
Christopher Bayliss, Javier Panadero, Laura Calvet, and Joan Manuel Marquès (Open University of Catalonia)
Abstract Abstract
Micro-blogging services (MBSs) have become increasingly popular in recent decades, but most have a poor reputation regarding the privacy of user data. Volunteer computing enables the implementation of decentralized systems based on heterogeneous resources donated by volunteers. However, volunteer computing is characterized by the unreliability of those resources: users are under no obligation to remain online. In this context, we define the problem of designing a directory service policy for a distributed volunteer computing MBS (DVCMBS). This service relies on repositories donated by volunteers and is managed by a centralized directory service, which stores replicas of users' blogs to ensure their online availability and allocates blog replicas to online repositories in order to maximize the availability of all blogs. Likewise, efficiency is essential. We describe a simulation model of a DVCMBS, which includes a parameterized directory service policy, and discuss preliminary numerical results.
pdf
Complex, Intelligent, Adaptive and Autonomous Systems
CIAAS - Theory
Chair: Wentong Cai (Nanyang Technological University)
Risk-Based A*: Simulation Analysis of a Novel Task Assignment and Path Planning Method
Maojia P. Li, Michael E. Kuhl, Rashmi Ballamajalu, Clark Hochgraf, Raymond Ptucha, Amlan Ganguly, and Andres Kwasinski (Rochester Institute of Technology)
Abstract Abstract
This paper addresses the task assignment and path planning (TAPP) problem for autonomous mobile robots (AMRs) in material handling applications. We introduce risk-based A*, a novel TAPP method that aims to reduce conflicts and travel distance for AMRs considering system uncertainties such as travel speed, turning speed, and loading/unloading time. An environment simulator predicts the distribution of future locations for each AMR and constructs a probability map of future AMR locations. A revised A* algorithm generates low-risk paths based on the probability map. A discrete event simulation experiment shows that our model significantly reduces the number of conflicts among robots in stochastic systems.
pdf
Use of Simulation-aided Reinforcement Learning for Optimal Scheduling of Operations in Industrial Plants
Satyavrat Wagle and Aditya Avinash Paranjape (Tata Consultancy Services Ltd)
Abstract Abstract
In this paper, we present an algorithm based on reinforcement learning for scheduling the operations of an industrial plant which is modeled as a network of machines on a directed acyclic graph. The algorithm is assumed to have access to a high-fidelity simulator of the plant, but not a mathematical model. The algorithm is designed to optimize an objective function over a moving window, similar to receding horizon control, for a typical industrial plant which converts raw material into finished products. The delivery schedule for the incoming raw material is assumed to be known but subject to uncertainty. A novel feature of our technique is the use of schedule moments to train the algorithm to handle a large class of incoming delivery schedules.
pdf
Track Coordinator - Commercial Case Studies: Martin Franklin (MOSIMTEC, LLC), David T. Sturrock (Simio LLC)
Commercial Case Studies
Digital Transformation I
Chair: David T. Sturrock (Simio LLC)
Simulating the Impact of Artificial Intelligence Innovations with a Modular Framework and Digital Twin
Laura Kahn and Ian McCulloh (Accenture Federal Services)
Abstract Abstract
U.S. federal government agencies oversee a wide array of citizen benefits, which affect millions of Americans. Federal benefits administration is a complex interaction of systems that can be approximated with a modular framework and digital twin. Rather than focusing on individual elements of the benefits administration operation, we aim to minimize interface issues between the elements by modeling the entire operation using a holistic modular framework. We also present a digital twin discrete event simulation of the benefits administration system to measure how much new Artificial Intelligence (AI) technologies improve government services.
pdf
Using the Digital Twin of an Educational Robotic Cell during Pandemic
Thomas Martin Rudolf and Luis Antonio Moncayo Martinez (Instituto Tecnologico Autonomo de Mexico, ITAM)
Abstract Abstract
During the 2020 pandemic caused by COVID-19, universities faced the problem of how to teach laboratory courses without using university facilities. At ITAM, one specific lab teaches students how to plan and program a production line using machine tools, robots, and conveyors equipped with sensors and actuators controlled by a PLC. During the pandemic, we built a digital twin of this robotic cell using SIMIO and other simulation tools to provide the experience of planning and improving the production line virtually.
pdf
Commercial Case Studies
Digital Transformation II
Chair: Martin Franklin (MOSIMTEC, LLC)
Client Experience Transformation: From the Art of Management to the Science of Digitalization
Sreekanth Ramakrishnan (IBM) and Faisal Aqlan (Penn State Behrend)
Abstract Abstract
Client Experience (CX) is a discipline that is gaining traction across enterprises. CX is vital for the survival of today’s organizations due to the increased market competition, which makes product differentiation a massive challenge. To provide excellent CX, companies need to understand their customers’ expectations, determine how and to what extent an experience-based business can create growth, and continuously improve and sustain CX by utilizing data analytics and fact-based decisions. This case study integrates the art of CX management with the science of digitalization to enable a seamless experience for clients.
pdf
Virtual Lab: A Framework for Modeling Decisions in R & D at Bayer Crop Science
Shrikant Jarugumilli, Yiou Wang, Yao Nie, Anirudha Kulkarni, and Meng Liu (Bayer); Jeff Woessner, Megan Castle, Dave Baitinger, and Jennifer Becker (Bayer Crop Science); and Jesus Jimenez (TalentWave)
Abstract
This paper summarizes the applications of the “Virtual Lab,” a generic computer simulation framework for modeling both strategic and operational decisions within Bayer Crop Sciences. The Virtual Lab framework generates a computer simulation model that is configurable to different business scenarios by enabling what-if analyses. The modeling architecture combines discrete-event simulation and agent-based simulation. The applications of the simulation framework include lab capacity analysis, shift planning, batch-sizing analysis, job sequencing and scheduling. The presentation will explain how this framework can be used in an R&D function; it will also discuss the challenges and opportunities of building, validating and implementing the framework from a practitioner’s viewpoint.
pdf
Commercial Case Studies
Capacity Analysis
Chair: George Miller (MOSIMTEC)
Simulation Capacity Analysis for the Carrapateena Block Cave
Colin M. Eustace (Polymathian), Daniel Lagacé (OZ Minerals), and Lewis J. Bobbermen (Polymathian)
Abstract
Underground mining operations have recently commenced at Carrapateena, which is one of the largest copper reserves in Australia. OZ Minerals are currently studying an expansion to a block caving operation for the lower portion of the orebody. This involves undermining the orebody so that it collapses and breaks up under its own weight, and then extracting the ore from an array of drawbells on the production level.
With production operations concentrated on a single level, operational complexities and constraints can have a significant effect on overall mine performance and output. Simulation of production operations, including loader interactions, drive availability, and secondary breakage and drawbell constraints, was used to identify potential operational constraints at different stages in the life of the block caving operation. Alternative production level layouts, equipment types, and operational methodologies were evaluated to guide refinement of the block cave design for subsequent detailed design studies.
pdf
Simulating an Automated Breakpack System in a Walmart Distribution Center
Amy Greer (MOSIMTEC, LLC) and Scott Ponsford and Sean Martin (Walmart Canada)
Abstract
This case study focuses on the simulation of a soon-to-be implemented automation system within a Walmart Canada distribution center. Many SKUs cannot be sent to retail stores in full case quantities, as they are slow movers and would require individual stores to carry excessive inventory. Breakpack is the process of breaking cases down into individual eaches and combining them into mixed-SKU cartons. Automating breakpack offers significant labor and quality savings, but also a high degree of complexity. SKUs should be grouped to minimize labor during the store put-away process, while also attempting to minimize labor and transportation cost for the DC and the overall supply chain. This presentation will review the simulation model used to help the retailer understand the SKU profile that should be used for breakpack automation, understand the best way to schedule the decanting operation, and understand the store friendliness of cartons generated by the system.
pdf
Evaluating Workers Allocation Policies Through the Simulation of a High Precision Machining Workshop
Maude Beauchemin, Jonathan Gaudreault, and Ludwig Dumetz (Université Laval) and Stéphane Agnard (APN GLOBAL)
Abstract
Classic machining workshops assign one worker per machine. However, the production line we work with is highly connected, and we now see the possibility of allocating tasks to workers more dynamically. A discrete-event simulation model of the metal parts manufacturing line was built in order to test different allocation policies. We measure how more advanced policies lead to increased efficiency.
pdf
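As an aside, the gain from pooling workers across machines rather than dedicating one worker per machine can be illustrated with a minimal discrete-event sketch. This is not the authors' model; it is a simplified queueing comparison with assumed exponential arrival and service times, where a shared pool serves jobs first-come-first-served across all stations.

```python
import heapq
import random

def avg_wait(num_stations, arrival_rate, service_rate, num_jobs, pooled, seed=1):
    """Average job waiting time under two worker-allocation policies.

    A single Poisson job stream is generated; each job is pre-assigned to a
    station. Dedicated: worker i serves only station i's queue. Pooled: any
    free worker serves the next waiting job (FCFS across all stations).
    """
    rng = random.Random(seed)
    t, jobs = 0.0, []
    for _ in range(num_jobs):
        t += rng.expovariate(arrival_rate)          # arrival instant
        station = rng.randrange(num_stations)       # which machine emitted it
        service = rng.expovariate(service_rate)     # processing time
        jobs.append((t, station, service))

    total_wait = 0.0
    if pooled:
        free = [0.0] * num_stations                 # next-free time per worker
        heapq.heapify(free)
        for arrive, _station, service in jobs:
            start = max(arrive, heapq.heappop(free))
            total_wait += start - arrive
            heapq.heappush(free, start + service)
    else:
        free = [0.0] * num_stations                 # one dedicated worker each
        for arrive, station, service in jobs:
            start = max(arrive, free[station])
            total_wait += start - arrive
            free[station] = start + service
    return total_wait / num_jobs
```

Running both policies on the same job stream (e.g. 4 stations, total arrival rate 3.0, service rate 1.0) shows the pooled policy yielding a markedly lower average wait at identical utilization, which is the intuition behind the more dynamic allocation policies the abstract evaluates.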
Commercial Case Studies
Manufacturing
Chair: Haiping Xu (Northrop Grumman Corporation)
A Virtual Learning Factory for Advanced Manufacturing
Faisal Aqlan (Penn State Behrend), Richard Zhao (University of Calgary), Hui Yang (Pennsylvania State University), and Sreekanth Ramakrishnan (IBM Corporation)
Abstract
Virtual reality (VR) technology allows for the creation of fully immersive environments that enable personalized manufacturing learning. This case study discusses the development of a virtual learning factory that combines manual and automated manufacturing processes including welding, fastening, 3D printing, painting, and automated assembly. Two versions of the virtual car factory were developed: (1) a multiplayer VR environment for design and assembly of car toys, which allows for the collaboration of multiple VR users in the same environment, and (2) a virtual plant that involves heavy machinery and automated assembly lines for car manufacturing. The virtual factory also includes an intelligent avatar that can interact with the users and guide them to the different sections of the plant. The virtual factory enhances the learning of advanced manufacturing concepts by combining virtual objects with hands-on activities and providing students with an engaging learning experience.
pdf
Progressive Assembly Simulation for the Final Assembly and Tests of Two Products
Haiping Xu and Nicholas D. Andrews (Northrop Grumman Corporation)
Abstract
Northrop Grumman Corporation was facing growing customer demand for two of its key products. A cross-functional team designed a progressive assembly line for Product A/B final assembly and test in the production facility to meet that demand. Simulation studies were requested to support multiple phases of production demand over the course of five years. The factory simulation engineer developed process simulation models to help the progressive assembly line project team quickly verify its design. The simulation results assisted the team in verifying and improving the design, helping managers make data-driven decisions about the production schedule, eliminate or mitigate risks, and allocate funding for additional equipment. The simulation models and their input data were modified and reused after the progressive assembly line was launched to further assist project teams in their continuous improvement efforts.
pdf
Use of Discrete Event Simulation to Inform Capital Expenditures
Bryan Sydnor (Science of Manufacturing)
Abstract
Schedules provide an invaluable data set for project management: the interconnection of tasks and their durations, forward and reverse path calculations, resource allocation, costs, etc. Schedules begin to suffer when tasks or sub-projects lack a predecessor, or have inherently long durations containing significant free slack, with consequences for finishing late or penalties for finishing early. Manufacturing expansion is often tied to a sales forecast or an expectant customer; failing to deliver on schedule can mean a significant loss of customers. Using Discrete Event Simulation (DES) techniques to augment schedule data, incorporate task variances, and build output distributions allows for a more informed project start date. This case study examines how to structure DES inputs and outputs to help schedule the major components of a manufacturing expansion: equipment and tooling, facility modifications, and the initial build.
pdf
Commercial Case Studies
Transportation I
Chair: Khaled Mabrouk (Sustainable Productivity Solutions)
Valley Recycling Designs New Facility Using Simulation
Jacquelin Salinas (Valley Recycling) and Khaled Mabrouk (Sustainable Productivity Solutions)
Abstract
Valley Recycling is opening a new recycling operation adjacent to their current facility near downtown San Jose. The new site is expected to allow Valley Recycling to process a significantly greater volume of trucks unloading recycling material. Prior to opening the new site, Valley Recycling utilized simulation and industrial engineering support provided by Sustainable Productivity Solutions to determine how best to streamline truck movement on the new site so as to:
1) Avoid having trucks stretching out onto the main road which would incur traffic fines from the City of San Jose,
2) Avoid having to turn trucks away,
3) Avoid hiring additional employees, after startup, to make the system work,
4) Optimize flow so as to maximize volume of trucks processed, and
5) Increase customer satisfaction by minimizing time spent waiting in line.
pdf
Simulation Model to Select an Optimal Solution for a Milk Run Internal Logistic Loop: Case Study
Bozena Mielczarek (Wroclaw University of Science and Technology) and Jacek Sachanbiński (Johnson Matthey Battery Systems)
Abstract
A discrete event simulation (DES) model was built to develop an optimal milk run strategy for collecting finished products from the production area and transporting them to the dispatch warehouse. In the current system, products are picked up manually by warehouse employees using hand pallet trucks. This system is vulnerable to delays, particularly delayed pallet collection, thus making it difficult to organize the team’s work in the warehouse. The first objective of this case study was selecting the parameters for a new system that would guarantee the smooth execution of receiving and delivering finished products to the dispatch warehouse without disturbing the production lines. The second objective was to determine the amount of resources that need to be committed to achieve the desired efficiency.
pdf
Synthetic Trip List Generation for Large Simulations
Ben Frederick M. Intoy, George Panteras, Kevin Liberman, and Trey LaNasa (Deloitte)
Abstract
Decision-makers use large simulations to plan the future of complex systems. Here we use the example of transportation networks, where the accuracy of the simulation model is highly dependent upon an accurate representation of the users' behavior in the network. Such user data may be sparse, private, or difficult to obtain, and would have to be generated synthetically using available data. We present a method to synthetically generate user travel schedules, which are then used in a massive-scale agent-based model to inform decisions and assess their future impacts.
pdf
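The core idea of synthetic trip generation, sampling individual trips consistent with aggregate observed data, can be sketched in a few lines. This is a hypothetical minimal illustration, not the authors' method: it assumes only an origin-destination count table and a departure-hour histogram are available, and samples trips from those marginals independently.

```python
import random

def synth_trips(od_counts, depart_hist, n, seed=0):
    """Sample n synthetic trips from aggregate data.

    od_counts: dict mapping (origin, dest) -> observed trip count.
    depart_hist: dict mapping departure hour -> observed weight.
    Returns a list of trip records suitable for feeding an agent-based model.
    """
    rng = random.Random(seed)
    od_pairs, od_weights = zip(*od_counts.items())
    hours, hour_weights = zip(*depart_hist.items())
    trips = []
    for _ in range(n):
        origin, dest = rng.choices(od_pairs, weights=od_weights)[0]
        hour = rng.choices(hours, weights=hour_weights)[0]
        trips.append({
            "origin": origin,
            "dest": dest,
            "depart": hour + rng.random(),  # uniform within the chosen hour
        })
    return trips
```

A real pipeline would condition departure time on the origin-destination pair and chain trips into full daily schedules per synthetic traveler, but the marginal-sampling skeleton above is the usual starting point.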
Commercial Case Studies
Transportation II
Chair: Jeremy O'Donnell (Model Performance Group)
Simulating Warehouse Operations: Goods to Person Picking Using a Multi Level Shuttle
Matthew Hobson-Rohrer (Roar Simulation) and Juergen Baumbach (Logistics Automation systems and Technology)
Abstract
Order fulfillment operations, including E-Commerce, are under constant pressure to deliver high customer service levels. Companies like Amazon continue to raise the bar of online customer expectations, driving other companies to adapt to the rapidly changing world of online commerce. Traditional picking systems are labor intensive and may not always be the best solution for handling certain order types. Some fulfillment center managers are employing automation to increase pick rates and to handle the large number of products. Goods to Person (GTP) stations can be configured with multi-level shuttle systems to meet this need. Roar Simulation and Logistics Automation system and Technology (LAsT) have recently worked on several Shuttle-GTP projects. This case study summarizes some of those projects, showing the value that a simulation model can provide when deciding whether to purchase automation, and how to operate the equipment more efficiently as the business changes.
pdf
A Streamlined Approach for Campus Bus Routing within SIMIO
Amy Brown Greer and Yusuke E. T. Legard (MOSIMTEC, LLC) and Joseph Wolski (National Institutes of Health (NIH))
Abstract
Many simulation packages offer out-of-the-box objects that cover a wide variety of situations. However, using these pre-built constructs can often result in more code, slower-running models, and harder-to-maintain models than building custom, targeted objects from scratch. In this presentation, the technical approach for a shuttle bus model for the National Institutes of Health will be discussed in detail. The techniques used in this model, particularly a focus on generating smart data structures on initialization and using custom-built objects, can be applied to modeling in any industry.
pdf
Commercial Case Studies
Foods/Agriculture
Chair: Caleb Whitehead (Simio LLC)
Virtual Factory for Corn Seed Manufacturing Facilities
Jennifer Becker, Shannon Hauf, and Chuck Johnson (Bayer Crop Science); Shrikant Jarugumilli, Tzai-Shuen Chen, and Akhil Arora (Bayer); Jane Kaiser and Kathryn McQueen (Bayer Crop Science); Xueping Li (XP Innovations); and Ronald G. Askin (TalentWave)
Abstract
This case study presents an overview of the Virtual Factory developed for North America Corn Manufacturing Facilities at Bayer Crop Sciences. In this talk, we will cover: process complexity of corn manufacturing, input data analysis (equipment data from OEE, MES, & the data historian and production plans), output analysis, and an overview of various business scenarios. We will conclude our talk by providing our perspectives on the various challenges and opportunities from a practitioner’s viewpoint.
pdf
Refrigerated Pallet Order Fulfillment: Evaluating a Fully Automated Facility Using Simulation
Matthew Hobson-Rohrer (Roar Simulation) and Jason Perks (viastore Systems)
Abstract
A large multinational food processing and distribution company is evaluating automation concepts for facilities in the U.S. The facilities receive pallets for thousands of products, store them, build mixed-case pallets to customer order, and then ship those pallets to other distribution centers and end customers. The automation design is developed by viastore systems, a leading international provider of automated solutions. Roar Simulation built the simulation model. The automated system included several functional areas tied together with a pallet handling monorail system. Simulation was instrumental in validating and refining the design, identifying system constraints, and determining the number of ASRS machines and monorail vehicles required during peak and average demand periods. Since most of the storage area is frozen and the functional areas are kept at a chilled temperature, the simulation model also tracked the cold chain for pallets that must exit and then re-enter the different temperature zones in the facility.
pdf
High Accuracy Discrete Rate and Reliability Modeling to Drive Improvement of Plant OEE and Throughput
Lawrence B. Fischel (Clorox Services Company) and Thomas J. Lange (Technology, Optimization, and Management)
Abstract
For Hidden Valley Ranch salad dressing, increased demand required increased production capacity. Rather than obtain additional equipment, improved efficiency was sought using modeling and simulation. Using existing, historical plant data for line event status in JMP Statistical software, failure-mode-specific uptime and downtime distributions were obtained. Using these distributions in an ExtendSim discrete rate and reliability model, the simulation matched the actual data within 1% Overall Equipment Effectiveness (OEE). This high accuracy model enabled prioritization of equipment and procedural improvements and exploration of product selection, run rate, and buffer size changes. Several counterintuitive improvements were identified. Even though increasing the production rate also increases the failure rate, the overall throughput increases. Frequent, short duration stoppages might seem innocuous; however, the integrated cooperativity of the production line magnifies the effects. Visually understanding the impact of their actions on the line stimulated increased vigilance as well as increased agency in the operations staff.
pdf
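The availability component of OEE that such a reliability model reproduces can be illustrated with a minimal alternating-renewal sketch. This is a simplified stand-in, not the ExtendSim model from the case study: it assumes a single failure mode with exponential uptime and downtime, whereas the study fitted failure-mode-specific distributions from plant data.

```python
import random

def simulate_availability(mttf, mttr, horizon, seed=0):
    """Estimate line availability (one factor of OEE) by Monte Carlo.

    The line alternates between exponentially distributed run intervals
    (mean mttf) and repair intervals (mean mttr); returns the fraction
    of the horizon spent running.
    """
    rng = random.Random(seed)
    t, uptime = 0.0, 0.0
    while t < horizon:
        run = rng.expovariate(1.0 / mttf)       # time to next failure
        uptime += min(run, horizon - t)         # clip the last interval
        t += run
        if t >= horizon:
            break
        t += rng.expovariate(1.0 / mttr)        # repair time
    return uptime / horizon
```

Over a long horizon the estimate converges to the textbook value MTTF / (MTTF + MTTR); the value of the full simulation in the case study is precisely that it captures what this sketch ignores: multiple failure modes, buffers, and line interactions.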
Commercial Case Studies
Supply Chain
Chair: Matthew Ballan (North Carolina State University)
Carbon Capture and Storage Supply Chain
Rienk Bijlsma (Systems Navigator)
Abstract
Carbon capture and storage (CCS) is the process of capturing waste carbon dioxide (CO2), usually from large point sources, transporting it to a storage site, and depositing it where it will not enter the atmosphere, normally an underground geological formation. Due to the high investment and operating costs of CCS supply chains, this technology has not been widely adopted, but this is bound to change as emission rights become more expensive. Pipelines combined with marine transport of liquified CO2 are seen as an effective solution for CCS that is both flexible and capable of economically moving large volumes to a storage site. Accurately predicting the network's costs is crucial to making the best design decisions for the supply chain. Given the network interaction, variability, storage requirements, and operating policies, simulation is a great technology for accurately predicting the supply chain cost range for various design choices.
pdf
Predicting Supply Chain Performance Under Rapid Unplanned Demand Fluctuation
Adam Graunke and Sebastian Urbina (Genpact)
Abstract
Due to COVID-19, a multinational Consumer Packaged Goods (CPG) producer is experiencing significantly altered product demand profiles, with demand for some products surging and others dropping significantly. Required production and inventory levels are unknown, resulting in cost uncertainty. A simulation-based analytics capability was rapidly developed and deployed to estimate the production and inventory levels required to meet this dynamic demand, utilizing the existing supply chain network structure, production constraints, current inventory status, and inventory replenishment policies. This data-driven model generates outputs such as production volumes, inventory levels, and costs by region and product category, which are integrated into an analytics dashboard for near-term planning and awareness. The model and dashboard enabled the company to identify products and production plants at risk of overproduction, including the estimated cost impact, and predicted the production rates and inventory levels needed to support dynamic demand on a weekly basis.
pdf
Evaluation of Supply Chain Strategy for a Heavy Equipment Manufacturer
Adam Graunke and Sebastian Urbina (Genpact)
Abstract
A global heavy equipment manufacturer is concerned that their make-to-order strategy is creating long lead times and low availability of product mix, which in turn is negatively impacting sales. To address this concern, the company has proposed a segmented supply chain strategy, with high-demand products made to stock and custom products made to order. Genpact was tasked with evaluating the proposed strategy change, and a discrete event simulation model was developed and deployed that identified a set of optimal supply chain policies. The recommended supply chain policies improved the key lead-time metric performance from 30% to 85% with no increase in inventory costs. Furthermore, the model demonstrated that demand forecast accuracy and production improvements could increase lead time metric to 93% and decrease inventory costs by 50%.
pdf
Commercial Case Studies
Scheduling
Chair: Jeremy O'Donnell (Model Performance Group)
Crane Scheduling at Steel Converter Facility Using Dynamic Simulation and Artificial Intelligence
Sparsh Choudhary, Amit Kumar, and Sumit Kumar (ITC Infotech)
Abstract
The overhead crane scheduling problem has been of interest to many researchers, and many approaches are available to solve it. While most approaches are optimization-based, some use a combination of simulation and optimization. We have used a combination of dynamic simulation and reinforcement learning (RL) based artificial intelligence (AI) to suggest crane movements with the objective of increasing the throughput of a steel converter facility.
pdf
Prediction of Lot Step Arrival Times in Semiconductor Manufacturing
Stephen Muvley (Applied Materials)
Abstract
The challenge of making better dispatching and scheduling decisions in terms of bottleneck tool area management and the optimization of batch sizes can be costly and difficult to address. Current methods using average cycle-time or queue-time controls do not fully represent the current state and true capacity of the fab. In order to improve the effectiveness of downstream productivity processes such as area schedulers or dispatching policy improvements, accurate lot step arrival time predictions are required. Applied Materials, along with a large 300mm semiconductor device manufacturer based in Asia, recently deployed (2019) an integrated prediction engine along with an optimization-based area scheduling solution to improve bottleneck area throughput, with the flexibility to adapt to changing business needs and operational scenarios. The following use case will present the solution, benefits, and results of the prediction engine deployment.
pdf
Applied Smartfactory Planning, Scheduling and Dispatching Solutions for Semiconductor Manufacturing
Madhu Mamillapalli (Applied Materials)
Abstract
Semiconductor manufacturing facilities have historically treated operations as straightforward decision-making processes and relied on traditional ERP systems and Excel spreadsheets for their planning and scheduling. However, with increasing complexity and demand changes, the inaccuracy and run time of these processes have increased steeply, leading to customer dissatisfaction. Such was the case with a large backend Assembly and Test (AT) manufacturing company experiencing long planning commit cycles (one week) and inaccurate scheduling sequences, leading to a drop in on-time delivery percentage. Existing ERP and advanced planning systems failed to handle the complexity, making them inaccurate, unstable, and cumbersome. Applied Materials deployed a proof of concept at this AT facility using two-thirds of the production volume to generate a fast and accurate commit plan with a detailed lot-level scheduling sequence and dispatch list. The following use case will present the solutions, performance, benefits, and results.
pdf
COVID-19 Case Studies
Testing Strategies
Chair: Vivek Bhatt (Ahmedabad University)
Group Testing Enables Asymptomatic Screening for COVID-19 Mitigation: Feasibility and Optimal Pool Size Selection with Dilution Effect
Yifan Lin, Yuxuan Ren, and Jingyuan Wan (Georgia Institute of Technology); Massey Cashore, Jiayue Wan, Yujia Zhang, and Peter I. Frazier (Cornell University); and Enlu Zhou (Georgia Institute of Technology)
Abstract
Group testing pools multiple samples together and performs tests on these pooled samples to discern the infected samples. It greatly reduces the number of tests, but at the cost of increased false-negative rates due to the dilution of the viral load in the pooled samples. Therefore, it is important to balance the trade-off between the number of tests and the number of false negatives. We compare two popular group testing methods, namely the linear array (a.k.a. Dorfman's procedure) and square array methods, and analyze the optimal pool size that minimizes the number of false negatives per person under a testing-capacity constraint. We consider testing a closed community and determine the optimal testing cycle length that minimizes the final prevalence of infection at the end of the time period. Finally, we provide a testing protocol for practitioners to use these group testing methods in the COVID-19 pandemic.
pdf
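The test-count side of the trade-off in Dorfman's procedure has a simple closed form, sketched below under the simplifying assumptions of independent infections and a perfect test (i.e., ignoring the dilution effect that the paper explicitly models): a pool of n costs one test, plus n follow-up tests whenever the pool is positive.

```python
def dorfman_tests_per_person(p, n):
    """Expected tests per person for Dorfman (linear array) pooling.

    With prevalence p and pool size n, each person incurs 1/n for the
    pooled test, plus one follow-up test when the pool is positive,
    which happens with probability 1 - (1 - p)**n.
    """
    return 1.0 / n + 1.0 - (1.0 - p) ** n

def best_pool_size(p, n_max=32):
    """Pool size in 2..n_max minimizing expected tests per person."""
    return min(range(2, n_max + 1),
               key=lambda n: dorfman_tests_per_person(p, n))
```

For example, at 1% prevalence this yields an optimal pool size of 11 with roughly 0.2 tests per person, a fivefold saving over individual testing. Accounting for dilution, as the paper does, shifts the optimum toward smaller pools because larger pools inflate the false-negative rate.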
Testing-based Interventions for COVID Pandemic Policies
Eva Regnier, Susan M. Sanchez, and Paul J. Sanchez (Naval Postgraduate School)
Abstract
Testing and test-based interventions are critically important in managing the virus that causes COVID-19 because people who are infected can transmit the virus when they have no symptoms. We develop simulation-based tools to help assess testing-based interventions for COVID management.
pdf
COVID-19 Case Studies
Impact of COVID-19 on Supply Chains
Chair: Anastasia Anagnostou (Brunel University London)
Simulation Optimization Approach for Reconfiguration of the Perishable Food Supply Chain During Disease Outbreak
Anchal Patil, Vipulesh Shardeo, Ashish Dwivedi, and Jitender Madaan (Indian Institute of Technology Delhi)
Abstract
Considering the impact of COVID-19 prevention policies on the perishable food supply chain, this work explores the effect of network reconfiguration on reducing the risk of spread. The Indian capital, Delhi, has seven large aggregator markets responsible for the redistribution of perishable food. These markets may have acted as hotspots for the disease. Thus, we adopted agent-based modelling to examine potential locations for ad-hoc markets. Next, we propose to optimize the reconfigured network considering travel time and cost, product quantity and quality, and the establishment cost of the new markets.
pdf
Integrating Agility, Volatility and Sustainability Perspectives: A Case Study for an Effective Supply Chain Model under COVID-19
Mohammad Shamsuddoha and Tasnuba Nasir (Western Illinois University)
Abstract
An effective supply chain model is required for a sustainable business to maintain optimum production, profitability, and market share under COVID-19 or any pandemic situation. Entrepreneurs are working hard to integrate agility, deal smoothly with a volatile market, and apply sound sustainability indices to ensure maximum benefits in all aspects. A large case farm was chosen to identify the current processes and build a simulation model accordingly. This model incorporates all the relevant variables on agility, volatility, and sustainability. The simulation model will then be experimented with, using several inputs and desired outputs, to determine optimality in profit, production, market share, and the like. Finally, the results will be disseminated to the case industry for further extensions and amendments, so that it can adapt its existing operations to achieve sustainable outputs.
pdf
Forecasting Supply Chain Impact by Predicting Governmental Decisions in the COVID-19 Pandemic
Pauline Kienzl (Infineon Technologies AG), Hans Ehm (Infineon Technologies), and Abdelgafar Hamed (Infineon Technologies AG)
Abstract
During the pandemic, semiconductor companies' supply chain impacts were largely defined by governmental decisions that affected transit times. Later, one of the major impacts came from reduced demand for semiconductors. Therefore, a System Dynamics (SD) model that investigates the interdependence between infections and the strictness of governmental restrictions using an extended SEIR model was developed in AnyLogic. Governmental measures were quantified using the coronavirus government response tracker from the University of Oxford's Blavatnik School of Government. Based on this conceptual output, the resulting transit-time increases were linked to the strictness of these measures. The model links reductions in semiconductor demand to the duration of lockdowns as indicated by the current measures. The findings show that demand shocks and transit-time delays can be buffered using flexible capacities and safety stocks.
pdf
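The epidemiological core of such a model, the SEIR compartments that drive the strictness and transit-time signals, can be sketched as a simple forward-Euler integration. This is a generic textbook SEIR illustration with assumed parameter names (beta, sigma, gamma), not the extended AnyLogic model from the case study.

```python
def seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of the basic SEIR model.

    Compartments are fractions of the population: susceptible s,
    exposed e, infectious i, recovered r. beta is the transmission
    rate, 1/sigma the mean incubation period, 1/gamma the mean
    infectious period.
    """
    s, e, i, r = s0, e0, i0, r0
    for _ in range(int(days / dt)):
        new_exposed = beta * s * i * dt     # S -> E
        new_infectious = sigma * e * dt     # E -> I
        new_recovered = gamma * i * dt      # I -> R
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
    return s, e, i, r
```

The extended model in the case study couples the infection curve to a government-strictness index and from there to transit times and demand, but the compartment bookkeeping above is the underlying engine.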
COVID-19 Case Studies
COVID Applications
Chair: John Shortle (George Mason University)
Utilizing Simulation to Evaluate Shuttle Bus Performance under Passenger Counts Impacted by COVID-19
Yusuke Legard and Nathan Ivey (MOSIMTEC, LLC) and Antonio Rodriguez and Joseph Wolski (National Institutes of Health (NIH))
Abstract
As with many organizations, the National Institutes of Health has seen a dramatic shift to remote work due to the COVID-19 pandemic. The NIH headquarters in Bethesda, Maryland operates a shuttle bus system to take employees between key buildings, along with transporting employees from off-site locations near Metro stations. NIH has utilized simulation modeling to understand the impact of shifting bus schedules under varying passenger demand. This simulation tool can be used to understand how bus schedules may need to be altered to accommodate staggered work patterns and how bus frequency should increase as workers begin returning to the NIH campus.
pdf
Distribution of PPE in Brazil
Rienk Bijlsma (Systems Navigator, Paragon)
Abstract
Brazil imports Personal Protective Equipment (PPE) consumables from China into Sao Paulo airport, from where they are distributed to hospitals across the country. With the COVID-19 pandemic, getting these materials to the right locations has become a mission-critical exercise that can save the lives of both patients and medical staff. An online planning solution assists the Brazilian government in deciding which shipments go where. Using advanced data analytics and simulation, stock levels across the country are predicted into the future, ensuring the best distribution strategy is chosen and materials are always available when required. Using Scenario Navigator's advanced scenario management, planners can easily create alternative plans to mitigate any unforeseen material shortages.
pdf
A Prototype System for Clustering Covid-19 Research Papers
Abdolreza Abhari and Mahfuja Nilufar (Ryerson University)
Abstract
We built a COVID-19 database in which around 40K papers are added to the system along with clusterings of the related papers. The clustering was done with two popular NLP models. The goal is to build database software that compares the full body of all COVID-19-related papers to find similar ones. We developed a prototype that considers abstracts, titles, and the full body of papers. Simulating different search scenarios, with similarity scores evaluated by the Microsoft Academic similarity tool, shows that abstract processing outperforms title processing. The results from the prototype also support a second hypothesis: integrating database search features with NLP methods to compare the full body of papers can increase the similarities even further. However, the time spent creating the clusters shows that the scalability of an ideal system that processes COVID-19 papers continuously is a significant challenge.
pdf
COVID-19 Case Studies
Emergency Department Operations under COVID-19
Chair: Edward Williams (PMC)
Data-Driven Staffing Decision-Making at a Large Emergency Department in Response to COVID-19
Shi Tang (Southern Methodist University); Alba Rojas-Cordova (Janssen Research and Development, LLC); and Samuel McDonald, Jakub Furmaga, Carl Piel, Mark Courtney, and Deborah Diercks (UT Southwestern Medical Center)
Abstract
Resource shortages and long waiting times across emergency departments (EDs) in the United States will likely worsen due to high volumes of COVID-19-like illness (CLI) patients. We build a discrete-event simulation model to capture a large ED's operations and examine the impact of CLI on ED throughput. We statistically analyze large datasets of actual standard and CLI patient encounters to define the model's input and validate its output. We compare the performance of five different staffing options, focusing on length of stay (LOS) and the number of patients who left without being seen (LWBS), under multiple standard and CLI patient volumes. Interestingly, we find that including an additional provider floating between standard patient ED care spaces leads to the most robust decrease in LOS and LWBS rates for both discharged and admitted patients, whereas adding an extra provider to CLI-dedicated ED care spaces had a small impact compared to the baseline staffing.
pdf
Evaluating Patient Triage Strategies for Non-Emergency Outpatient Procedures under Reduced Capacity Due to the COVID-19 Pandemic
Adam VanDeusen, Che-Yi Liao, Advaidh Venkat, Amy Cohn, Jacob Kurlander, and Sameer Saini (University of Michigan)
Abstract
The COVID-19 pandemic impacted the healthcare system in many ways, including the cancellation or deferral of non-urgent medical appointments as systems reduced capacity to keep patients safe and abide by governmental orders. We develop a discrete-event simulation to model how a clinical facility with reduced capacity for non-urgent appointments may triage patients to either alternative or delayed appointment options. Additionally, our model considers tiered reopening stages, in which appointment capacity is incrementally restored as restrictions are loosened. We apply our model to colonoscopy procedures at a Veterans Affairs clinic in Ann Arbor, Michigan. We consider patients at different risk levels who arrive each week and are seen by providers, with the highest-priority patients seen first and lower-priority patients waiting in a queue. We evaluate metrics including average patient wait time and the number of patients who wait longer than a designated number of weeks.
How to Evacuate an Emergency Department During Pandemics: A COVID-19 Agent-Based Model
Fardad Haghpanah, Kimia Ghobadi, and Benjamin W. Schafer (Johns Hopkins University)
Abstract
Evacuation of patients during a pandemic is a complicated process. Some patients may be infectious, some may be considered Persons Under Investigation (PUI) with pending test results, and some staff might be wearing Personal Protective Equipment (PPE) that restricts movement, all while additional infection control protocols might be in place to prevent further transmission. Modeling and simulation can help emergency planners by providing an estimate of intermediate and final evacuation times for different groups of patients. These results can provide insights for emergency evacuation planning or inform strategic decisions such as the location of PUI areas. In this study, we developed an agent-based model to simulate the evacuation of the emergency department at the Johns Hopkins Hospital during the COVID-19 pandemic. The results show that a larger nursing team can reduce the average and maximum probable evacuation times by 12 and 19 minutes, respectively.
COVID-19 Case Studies
Mitigating and Measuring the Effects of COVID-19
Chair: Edward Williams (PMC)
Designing for Distance: COVID-19’s Impact on a Los Angeles Vote Center
Nicholas D. Bernardo (The University of Rhode Island), Jennifer Lather (University of Nebraska-Lincoln), and Gretchen A. Macht (The University of Rhode Island)
Abstract
Due to the outbreak of COVID-19, concerns regarding public health and safety extend directly to elections; thus, in-person voting imposes new challenges for election administrators. This case study applies discrete-event simulation modeling to a COVID-19 election system and demonstrates that designing for processing changes, such as social distancing and equipment sanitization, differs from traditional elections. The separation of provisional voter check-ins, which reduced average time-in-system (ATS) and maximum time-in-system (MTS) in previous models, increased ATS (i.e., 54-65 minutes) and MTS (i.e., 75-100 minutes) in COVID models. When provisional check-ins were separated and check-in stations were relocated toward the vote center entrance, the ATS and MTS were significantly reduced (i.e., 9-19 minutes and 4-32 minutes, respectively). These findings indicate that election systems operating during COVID-19 require specific considerations rather than generalized recommendations.
Analyzing Covid-19 Control Strategies in Metropolitan Areas: A Customizable Agent-Based Simulation Tool
Connor Speir and Ashkan Negahban (Pennsylvania State University)
Abstract
With the rapidly changing dynamics and understanding of Covid-19, the need to analyze the efficacy of possible mitigation strategies has never been greater. Such strategies may include social distancing, mask-wearing, school/business closures, random testing, and quarantines of differing lengths. We develop an agent-based simulation tool that can be customized to simulate any chosen city. Data on the distribution of household sizes, age groups, commute patterns, preexisting health condition prevalence, and school and business assignments are used as inputs. In this presentation, we calibrate the simulation for New York City as a test bed, as the city soon became the epicenter of the outbreak in the United States. The simulation tool will be made publicly available and can be used by government officials and other decision makers as a decision-support tool to perform what-if analysis and design effective mitigation strategies for metropolitan areas based on various metrics.
Utilizing Bayesian Methods for COVID-19 Forecast and Statistical Inference
Gary Lin, Alisa Hamilton, Yupeng Yang, and Oliver Gatalo (Center for Disease Dynamics, Economics & Policy); Anindya Bhaduri (Johns Hopkins University); and Eili Klein (Center for Disease Dynamics, Economics & Policy; Johns Hopkins University)
Abstract
Given the continued threat of COVID-19, policymakers rely on computational models to provide statistical forecasts of deaths, hospitalizations, and case counts in order to make large-scale decisions. Another utility of models is to determine the impact of policy decisions on mitigating spread and hospitalizations. Bayesian methods achieve these objectives by providing likely forecast trends while also allowing for inference on parameters that are fitted to actual data. We utilized a data-driven, compartmental model to forecast COVID-19 trends and conduct inference on parameter values that directly translate to policy decisions. Additionally, we can use Bayesian inference to quantify the uncertainty in the parameter estimation as well as forecasted trends that can inform testing strategies and future data collection efforts.
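As a toy illustration of the Bayesian idea described above (not the authors' compartmental model), the sketch below computes a grid posterior over an exponential epidemic growth rate from synthetic daily case counts, assuming a Poisson likelihood and a flat prior. All data and parameter values are hypothetical.

```python
import math
import random

def grid_posterior(cases, rates):
    """Normalized posterior over growth rates r, assuming
    cases[t] ~ Poisson(cases[0] * exp(r * t)) and a flat prior on r."""
    log_post = []
    for r in rates:
        ll = 0.0
        for t, y in enumerate(cases):
            mu = cases[0] * math.exp(r * t)
            ll += y * math.log(mu) - mu - math.lgamma(y + 1)  # Poisson log-pmf
        log_post.append(ll)
    m = max(log_post)
    w = [math.exp(lp - m) for lp in log_post]   # stabilize before normalizing
    z = sum(w)
    return [x / z for x in w]

# Synthetic daily case counts growing at roughly r = 0.2 per day.
random.seed(1)
true_r = 0.2
cases = [max(1, int(random.gauss(50 * math.exp(true_r * t), 5))) for t in range(14)]

rates = [i / 100 for i in range(0, 51)]          # candidate rates 0.00 .. 0.50
post = grid_posterior(cases, rates)
r_hat = sum(r * p for r, p in zip(rates, post))  # posterior mean
print(f"posterior mean growth rate: {r_hat:.3f}")
```

The same grid gives credible intervals for free (quantiles of `post`), which is the uncertainty quantification the abstract highlights; the paper's actual model replaces the toy exponential curve with a data-driven compartmental model.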
COVID-19 Case Studies
Modeling the Spread of COVID-19
Chair: Chukwudi Nwogu (Brunel University London)
CityCOVID: A Computer Simulation of COVID-19 Spread in a Large Urban Area
Charles M. Macal, Jonathan Ozik, Nicholson T. Collier, Chaitanya Kaligotla, Margaret M. MacDonell, Cheng Wang, David J. LePoire, Youngsoo Chang, and Ignacio J. Martinez-Moyano (Argonne National Laboratory)
Abstract
CityCOVID is a city-scale agent-based model of millions of people in a large metropolitan area, currently the Chicago area. CityCOVID is being used to understand the possible spread of COVID-19 and to model the uncertainties of human behavior in response to public health interventions. This paper describes the model and its application in the COVID-19 crisis, and its use to support decision making at the city, county and state levels.
Development of Large-Scale Synthetic Population to Simulate COVID-19 Transmission and Response
Chaitanya Kaligotla (Argonne National Laboratory, The University of Chicago); Abby Stevens and Bogdan Mucenic (The University of Chicago); Jonathan Ozik and Nicholson T. Collier (Argonne National Laboratory, The University of Chicago); Kyoung Whan Choe (The University of Chicago); Sara P. Rimer (Argonne National Laboratory); Anna Hotton (The University of Chicago); and Charles M. Macal (Argonne National Laboratory, The University of Chicago)
Abstract
This research describes the development of city to multi-county scale synthetic populations for application to an agent-based model (CityCOVID) that simulates the endogenous transmission of COVID-19 and measures the impact of public health interventions.
Data Science for Simulation
Track Coordinator - Data Science for Simulation: Abdolreza Abhari (Ryerson University), Hamdi Kavak (George Mason University)
Data Science for Simulation
Data Science for Simulation I
Chair: Abdolreza Abhari (Ryerson University)
NIM: Modeling and Generation of Simulation Inputs via Generative Neural Networks
Best Contributed Theoretical Paper - Finalist
Wang Cen, Emily A. Herbert, and Peter J. Haas (University of Massachusetts Amherst)
Abstract
We introduce Neural Input Modeling (NIM), a generative-neural-network framework that exploits modern data-rich environments to automatically capture simulation input distributions and then generate samples from them. Experiments show that our prototype architecture NIM-VL, which uses a novel variational-autoencoder architecture with LSTM components, can accurately, and with no prior knowledge, automatically capture a range of complex stochastic processes and efficiently generate sample paths. Moreover, we show that the outputs from a queueing model with (known) complex inputs are statistically close to outputs from the same queueing model but with the inputs learned via NIM. Known distributional properties such as i.i.d. structure and nonnegativity can be exploited to increase accuracy and speed. NIM can thus help overcome one of the key barriers to simulation for non-experts.
Estimating Stochastic Poisson Intensities Using Deep Latent Models
Ruixin Wang, Prateek Jaiswal, and Harsha Honnappa (Purdue University)
Abstract
We present methodology for estimating the stochastic intensity of a doubly stochastic Poisson process. Statistical and theoretical analyses of traffic traces show that these processes are appropriate models of high intensity traffic arriving at an array of service systems. The statistical estimation of the underlying latent stochastic intensity process driving the traffic model involves a rather complicated nonlinear filtering problem. We develop a novel simulation methodology, using deep neural networks to approximate the path measures induced by the stochastic intensity process, for solving this nonlinear filtering problem. Our simulation studies demonstrate that the method is quite accurate on both in-sample estimation and on an out-of-sample performance prediction task for an infinite server queue.
Enhancing Input Parameter Estimation by Machine Learning for the Simulation of Large-Scale Logistics Networks
Yang Liu, Liang Yan, Sheng Liu, Ting Jiang, Feng Zhang, Yu Wang, and Shengnan Wu (JD Logistics)
Abstract
The quality of a large-scale logistics network simulation depends heavily on the estimation of its key input parameters, which are usually influenced by various factors that are difficult to obtain. To tackle this challenge, this paper proposes a framework to estimate these parameters with high precision through machine learning, in which the impacting factors are divided into static and dynamic groups and used as features to train a learning model for estimation. To overcome the obstacle that dynamic factors are hard to obtain in some scenarios, the proposed framework employs unsupervised learning to analyze their patterns and extract time-invariant features for modeling. A validation study is conducted on the estimation of distribution center sorting times. The results show that our approach generates more accurate estimates of input parameters, even under shifts in operational plans and in the absence of relevant data.
Data Science for Simulation
Data Science for Simulation II
Chair: Abdolreza Abhari (Ryerson University)
Agent-based Modeling and Simulation on Residential Population Movement Patterns: The Case of Sejong City
Best Contributed Applied Paper - Finalist
Tae-Sub Yun (KAIST), Young-Chul Kim and Ki-Sung Jin (ETRI), and Il-Chul Moon (KAIST)
Abstract
An urban simulation is a useful tool for urban administration and policy experiments. Our research goal is to compose an agent-based simulation that models the behavioral and movement patterns of the urban population of a real-world city, Sejong in South Korea. In particular, we modeled the urban dynamics of the city at the size of the real population and with real-world GIS data. We modeled the behavioral patterns of the population in accordance with time-use survey data. Lastly, we constructed the public transportation system based on bus lines and schedules. Our results show an initial qualitative validation of the urban population's behavior, specifically regarding the utilization of public transportation.
A Simheuristic Algorithm for Placing Services in Community Networks
Javier Panadero, Laura Calvet, Christopher Bayliss, and Joan Manuel Marquès (Open University of Catalonia); Mennan Selimi (Max van der Stoel Institute); and Felix Freitag (Universitat Politècnica de Catalunya)
Abstract
The growing demand for network connectivity in both rural and urban areas has boosted the number of community networks (CNs), which are owned and managed at the edge by volunteers. As a result, CNs tend to present characteristics such as irregular topology, heterogeneity of resources, and unreliable behavior, which means that CNs operate under a high level of uncertainty. Hence, CNs call for advanced simulation-optimization methods to place services. Despite their popularity, there is a lack of work discussing stochastic approaches. In this context, we propose a simheuristic algorithm to properly address this stochastic problem. Our approach combines Monte Carlo simulation with a multi-criteria optimal placement heuristic. The approach is tested using real traces of Guifi.net, the largest CN worldwide. The results support the need to take stochasticity into account in service placement.
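The simheuristic pattern — a deterministic heuristic proposes candidate placements, and Monte Carlo simulation re-evaluates them under uncertainty — can be sketched in a few lines. The node data, failure probabilities, and capacity-based heuristic below are hypothetical stand-ins for the paper's multi-criteria heuristic and Guifi.net traces.

```python
import random

def heuristic_rank(nodes):
    """Deterministic heuristic: rank candidate nodes by nominal capacity."""
    return sorted(nodes, key=lambda n: nodes[n]["capacity"], reverse=True)

def simulate_placement(node, nodes, n_runs=2000):
    """Monte Carlo estimate of expected delivered capacity when the node
    fails independently with its own probability (the stochastic part)."""
    rng = random.Random(0)
    total = 0.0
    for _ in range(n_runs):
        up = rng.random() > nodes[node]["fail_prob"]
        total += nodes[node]["capacity"] if up else 0.0
    return total / n_runs

nodes = {
    "alpha": {"capacity": 10.0, "fail_prob": 0.40},  # fast but unreliable
    "beta":  {"capacity": 7.0,  "fail_prob": 0.05},  # slower but stable
    "gamma": {"capacity": 4.0,  "fail_prob": 0.01},
}

ranked = heuristic_rank(nodes)                        # deterministic order
scores = {n: simulate_placement(n, nodes) for n in ranked}
best = max(scores, key=scores.get)
print(ranked[0], best)  # the deterministic winner need not survive the MC check
```

Here the heuristic prefers the unreliable high-capacity node, while the Monte Carlo evaluation reverses that choice — exactly the kind of correction the abstract argues stochastic approaches provide.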
Learning Lindley's Recursion
Sergio David Palomo and Jamol Pender (Cornell University)
Abstract
Lindley's recursion is one of the foundational formulas in queueing theory and applied probability. In this paper, we leverage stochastic simulation and current machine learning methods to learn the Lindley recursion directly from waiting-time data of the G/G/1 queue. To this end, we use methods such as Gaussian processes, k-nearest neighbors, and deep neural networks to learn the Lindley recursion. We also analyze specific parameter regimes of the G/G/1 queue to understand where learning the Lindley recursion may be easy or hard. Finally, we compare the machine learning methods to see how well we can predict the Lindley recursion multiple steps into the future with missing data.
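Lindley's recursion is short enough to state in code: W[k+1] = max(0, W[k] + S[k] − A[k+1]), where S is a service time and A an interarrival time. The sketch below generates G/G/1 waiting times from the recursion and fits a simple k-nearest-neighbor predictor to the one-step map, in the spirit of (but far simpler than) the paper's methods; the traffic distributions and parameters are illustrative assumptions.

```python
import random

def lindley_waits(n, interarrival, service, seed=42):
    """Generate successive G/G/1 waiting times via Lindley's recursion:
    W[k+1] = max(0, W[k] + S[k] - A[k+1])."""
    rng = random.Random(seed)
    w = [0.0]
    for _ in range(n - 1):
        s = service(rng)
        a = interarrival(rng)
        w.append(max(0.0, w[-1] + s - a))
    return w

def knn_predict(pairs, x, k=5):
    """k-nearest-neighbor regression on (W[k], W[k+1]) pairs,
    approximating the one-step Lindley map from data alone."""
    nearest = sorted(pairs, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

# M/M/1-style traffic: exponential interarrivals (rate 1), services (rate 1.25).
waits = lindley_waits(5000,
                      lambda r: r.expovariate(1.0),
                      lambda r: r.expovariate(1.25))
pairs = list(zip(waits[:-1], waits[1:]))
print(knn_predict(pairs, 2.0))  # predicted next wait given a current wait of 2.0
```

Multi-step prediction, as in the paper, would iterate the learned map on its own outputs rather than reading the next wait from data.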
Environment and Sustainability Applications
Track Coordinator - Environment and Sustainability Applications: Elie Azar (Khalifa University), Seong-Hee Kim (Georgia Institute of Technology)
Environment and Sustainability Applications
Resources and Risk Management
Chair: Seong-Hee Kim (Georgia Institute of Technology)
System Integration with Multiscale Networks (SIMoN): A Modular Framework for Resource Management Models
Marisa J. Hughes, Michael Kelbaugh, Victoria Campbell, Elizabeth Reilly, Susama Agarwala, Miller Wilt, Andrew Badger, and Evan Fuller (The Johns Hopkins University Applied Physics Laboratory); Dillon Ponzo (Amazon Web Services); and Ximena Calderon Arevalo, Alex Fiallos, Lydia Fozo, and Jalen Jones (The Johns Hopkins University)
Abstract
Although the scientific community has proposed numerous models of Earth and human systems, few tools are available that support the model coupling necessary to capture their complex interrelationships and promote further research cooperation. To address this challenge, we propose System Integration with Multiscale Networks (SIMoN), an open-source modeling framework with a novel methodology for supporting heterogeneous geospatial regions. SIMoN enables users to define consistent aggregation and disaggregation maps for transformation between disparate notions of geospatial units such as counties, watersheds, and power regions. We have applied this unique tool to couple models across domains including climate, population, and food-energy-water (FEW) systems.
A Simulation-based Decision-support System for Reducing Duration, Cost, and Environmental Impacts of Earthmoving Operations
Elyar Pourrahimian, Malak Al Hattab, Rana Ead, Ramzi Roy Labban, and Simaan AbouRizk (University of Alberta)
Abstract
Earthmoving operations are equipment-intensive processes that rely heavily on the proper selection of the equipment fleet and proper scheduling of associated tasks. Early equipment planning decisions have direct implications on schedules, costs, and more importantly, the environmental performance of such operations. While traditional planning of earthmoving works is ad-hoc and based on planners’ experiences, ensuring favorable performance requires advanced analytical techniques that consider multiple variables and competing objectives. Accordingly, this study develops a discrete-event simulation-based decision-support system (DES-DSS) for selecting the optimal equipment fleet, while considering the trade-offs between time, cost, and environmental impacts. The model’s results from a case study reveal how different fleet mixes and sizes can considerably impact associated emissions, durations, and costs. The DES-DSS can aid planners in making informed decisions during early planning stages and be used as a control feedback mechanism to continuously enhance operations in real-time while reducing emissions.
A Lattice Boltzmann Advection Diffusion Model for Ocean Oil Spill Surface Transport Prediction
Zhanyang Zhang, Tobias Schäfer, and Michael E. Kress (City University of New York, College of Staten Island)
Abstract
The focus of our study is to investigate the feasibility and effectiveness of using the Lattice Boltzmann Advection Diffusion Equation (LBM-ADE) to model and simulate ocean oil spill transport at the surface level. We present preliminary results from a prototype model and simulation at limited scale (a subarea of the Gulf of Mexico), with assimilation of real ocean current data from the Unified Wave Interface-Coupled Model (UWIN-CM). We validate our model in a benchmark study against GNOME, a tool developed and used by NOAA for ocean oil spill forecasting, under two scenarios: (i) a Gaussian hill concentration using a linear ocean current, with the analytical solution as a reference; and (ii) a Gaussian hill concentration using real ocean current data from the UWIN-CM. In both cases, our benchmark results show that the LBM-ADE model solutions are very close to the targeted analytical and GNOME solutions given the same initial oil spill and location.
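The collide-and-stream structure of a lattice Boltzmann advection-diffusion solver can be shown in one dimension. The sketch below is a minimal D1Q3 BGK scheme advecting a Gaussian hill — a deliberately reduced analogue of the paper's two-dimensional, data-assimilating model; grid size, velocity, and relaxation time are hypothetical.

```python
import math

def lbm_ade_1d(conc, velocity, tau, steps):
    """1-D D1Q3 lattice Boltzmann solver for the advection-diffusion equation
    (BGK collision, periodic boundaries). In lattice units the diffusivity is
    cs2 * (tau - 0.5) with cs2 = 1/3."""
    n = len(conc)
    w = [4 / 6, 1 / 6, 1 / 6]   # weights for velocities e = 0, +1, -1
    e = [0, 1, -1]
    cs2 = 1 / 3
    # Initialize populations at equilibrium for the given concentration field.
    f = [[w[i] * conc[x] * (1 + e[i] * velocity / cs2) for x in range(n)]
         for i in range(3)]
    for _ in range(steps):
        c = [f[0][x] + f[1][x] + f[2][x] for x in range(n)]  # macroscopic field
        for i in range(3):
            for x in range(n):
                feq = w[i] * c[x] * (1 + e[i] * velocity / cs2)
                f[i][x] -= (f[i][x] - feq) / tau             # BGK collision
        f[1] = [f[1][(x - 1) % n] for x in range(n)]         # stream right
        f[2] = [f[2][(x + 1) % n] for x in range(n)]         # stream left
    return [f[0][x] + f[1][x] + f[2][x] for x in range(n)]

# Gaussian hill advected to the right while diffusing.
n = 100
init = [math.exp(-((x - 30) ** 2) / 20) for x in range(n)]
out = lbm_ade_1d(init, velocity=0.1, tau=0.8, steps=100)
print(out.index(max(out)))  # peak drifts from x = 30 toward x = 40
```

The scheme conserves total mass exactly (up to round-off), and the peak moves at the imposed current speed — the two properties a benchmark against an analytical Gaussian-hill solution, as in the paper, would check first.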
Environment and Sustainability Applications
Energy, Food, and Water
Chair: Neda Mohammadi (Georgia Institute of Technology)
Cell-DEVS Models for CO2 Sensor Locations in Closed Spaces
Hoda Khalil, Gabriel Wainer, and Zachary Dunnigan (Carleton University)
Abstract
Given the global warming crisis and its correlation with levels of energy consumption, it is paramount to find ways to reduce energy consumption in closed spaces with minimal disruption to occupants' comfort. Thus, researchers are working to improve methodologies for occupant-based demand-controlled heating, ventilation, and air conditioning. Sensor usage for occupancy detection is among the methodologies researched for controlling consumption. Carbon dioxide sensors have proved effective but overly sensitive to configuration. Research has also shown that there is an undetermined latency period between changes in the number of occupants and the carbon dioxide sensors' detection of that change. We present a work-in-progress method to determine the best placement of carbon dioxide sensors for accurate occupant detection and calculation of latency, using the Cell-DEVS (Cellular Discrete-Event System Specification) formalism. We present several case studies showing the resemblance between physical closed spaces and the models, and how the simulation replicates real-life scenarios.
Multi-threaded Simulation Optimization Platform for Reducing Energy Use in Large-Scale Water Distribution Networks with High Dimensions
David D. Linz, Erin N. Musabandesu, Behzad Ahmadi, Robert T. Good, and Frank J. Loge (University of California Davis, Center for Water-Energy Efficiency)
Abstract
Hydraulic simulation models are used to reduce the energy use of pumps in water distribution networks through simulation optimization, by selecting operating policies that reduce energy usage while meeting customer water demand. Typical simulation optimizations of complex hydraulic systems have high-dimensional decision spaces and require significant time to evaluate. This study presents the design of a new multi-threaded simulation optimization software platform to determine pump operations for water distribution networks. The platform explores rule-based controls for pumps using derivative-free simulation optimization methods as independent, parallelized computational tasks. Decision spaces are reduced through domain division, which produces smaller subproblems to be sequentially optimized. The platform is applied to a real urban water distribution system case study to determine energy-efficient pump operating policies. The performance of several optimization techniques is compared, indicating that domain division approaches may improve the consistency of optimization but are not necessarily beneficial for all optimization techniques.
Modeling the Relationship Between Food and Civil Conflict
Elizabeth Reilly, Susama Agarwala, Michael T. Kelbaugh, Agata Ciesielski, Hani-James M. Ebeid, and Marisa Hughes (JHU/APL)
Abstract
We built a system of systems model to better understand the relationship between the agricultural sector, other economic factors, and changes in expected value of conflict. Our model integrates multiple factors, including food production, food trade, population, and civil conflict, and determines their interdependencies based on shared inputs or outputs. We find that severe food price shocks, precipitated by multiple breadbasket failures, can severely impact a country’s GDP and its ability to purchase and consume a sufficient amount of food, resulting in an increase in civil conflict and related casualties. A sharp population increase, as potentially caused by an immigration surge, was found to have a similar impact, though not as strong.
Environment and Sustainability Applications
Environmental and Social Systems
Chair: Barry Lawson (University of Richmond)
Human-Infrastructure Interactional Dynamics: Simulating COVID-19 Pandemic Regime Shifts
Neda Mohammadi and John Taylor (Georgia Institute of Technology)
Abstract
When subject to disruptive events, the dynamics of human-infrastructure interactions can absorb, adapt, or, in a more abrupt manner, undergo substantial change. These changes are commonly studied when a disruptive event perturbs the physical infrastructure. Infrastructure breakdown is, thus, an indicator of the tipping point, and possible regime shift, in the human-infrastructure interactions. However, determining the likelihood of a regime shift during a global pandemic, where no infrastructure breakdown occurs, is unclear. In this study we explore the dynamics of human-infrastructure interactions during the global COVID-19 pandemic for the entire United States and determine the likelihood of regime shifts in human interactions with six different categories of infrastructure. Our results highlight the impact of state-level characteristics, executive decisions, as well as the extent of impact by the pandemic as predictors of either undergoing or surviving regime shifts in human-infrastructure interactions.
Simulation of Aerial Suppression Tasks in Wildfire Events Integrated with GisFIRE Simulator
Jaume Figueras i Jové, Antoni Guasch i Petit, and Josep Casanovas-García (Universitat Politècnica de Catalunya)
Abstract
Wildfire simulation tools focus on how fire spreads in the natural environment. Simulating fire containment operations can provide managers with a tool that combines wildfire evolution with suppression operations. Such combined simulation tools are useful for evaluating different strategies and tactics in fighting wildfires. This paper presents the modeling and simulation of aerial containment operations integrated into a wildfire spread simulator. The continuous-space GisFIRE wildfire spread simulator is used to simulate fire spread and integrate the aerial operations, and the QGIS tool integrates both simulators with geographical information such as air facility locations, usable bodies of water, and other relevant geo-information. Open-source software is a requirement to allow the integration of different software packages and the use of OGC standards to represent geographical information.
Healthcare Applications
Track Coordinator - Healthcare Applications: Christine Currie (University of Southampton), Masoud Fakhimi (University of Surrey), Maria Mayorga (North Carolina State University)
Healthcare Applications
Simulation Models for Resource Planning during COVID-19
Chair: Vishnunarayan Girishan Prabhu (Clemson University)
Team-Based, Risk-Adjusted Staffing During a Pandemic: An Agent-Based Approach
Vishnunarayan Girishan Prabhu and Kevin Taaffe (Clemson University); William Hand (Prisma Health-Upstate, USC); and Caglar Caglayan, Tugce Isik, and Yongjia Song (Clemson University)
Abstract
Since the World Health Organization declared the novel coronavirus disease a pandemic, more than 2 million cases of infection and 140,000 deaths have been reported across the world. Specialty physicians are now working as frontline workers due to hospital overcrowding and a lack of providers, which places them at high risk in the epidemic. Within these specialties, anesthesiologists are one of the most vulnerable groups, as they come in close contact with the patient's airway. An agent-based simulation model was developed to test various staffing policies within the anesthesiology department of the largest healthcare provider in Upstate South Carolina. We demonstrate the benefits of a restricted, no-mixing shift policy, which segregates the anesthesiologists into groups and assigns each group to a shift within a single hospital. Results consistently show a reduction in both the number of anesthesiologists unavailable to work and the number of infected anesthesiologists.
Planning Ward and Intensive Care Unit Beds for COVID-19 Patients Using a Discrete Event Simulation Model
Best Contributed Applied Paper - Finalist
Daniel Garcia-Vicuña and Fermin Mallor (Public University of Navarre) and Laida Esparza (Navarre Hospital Compound)
Abstract
This paper reports the construction of a simulation model used to support decision-making for the short-term planning of the hospital beds needed to face COVID-19 in Navarre (Spain). The simulation model focuses on estimating the health system's transitory state. It reproduces the outbreak dynamics by using the Gompertz growth model and models the patient flow through the hospital, including possible admission to the Intensive Care Unit (ICU). The output estimates the number of ward and ICU beds needed to provide healthcare to all patients over the coming days. The simulation model uses expert opinion in the first stages of the outbreak; as more data are collected, the necessary parameters are fitted by statistical analysis or by combining both sources. Every day, the research team informed the regional logistics team in charge of planning health resources, and based on these predictions the authorities planned the necessary resources.
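The Gompertz growth model mentioned above has the closed form N(t) = K · exp(−b · e^(−c·t)), where K is the final outbreak size and c governs how quickly growth decays. A small sketch (with hypothetical parameters, not values fitted to the Navarre data) shows how its daily increments could drive patient arrivals in a bed-planning simulation.

```python
import math

def gompertz(t, K, b, c):
    """Cumulative cases under Gompertz growth: N(t) = K * exp(-b * exp(-c*t))."""
    return K * math.exp(-b * math.exp(-c * t))

def daily_new_cases(K, b, c, days):
    """Day-over-day increments of the Gompertz curve, usable as daily
    arrivals feeding a hospital bed-occupancy simulation."""
    totals = [gompertz(t, K, b, c) for t in range(days + 1)]
    return [totals[t + 1] - totals[t] for t in range(days)]

# Hypothetical outbreak: 10,000 total cases, inflection near day ln(b)/c ~ 23.
new_cases = daily_new_cases(K=10_000, b=10.0, c=0.1, days=120)
peak_day = new_cases.index(max(new_cases))
print(peak_day, round(sum(new_cases)))
```

Each simulated patient drawn from these arrivals would then follow the hospital flow (ward stay, possible ICU admission) with lengths of stay sampled from fitted distributions, which is the part the paper's discrete-event model handles.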
Impact of COVID-19 Epidemics on Bed Requirements in a Healthcare Center Using Data-Driven Discrete-Event Simulation
Jules Le Lay (Mines Saint-Etienne); Edgar Alfonso-Lizarazo (University of Lyon, University Jean Monnet); Vincent Augusto (Mines Saint-Etienne); Bienvenu Bongue (Université Jean Monnet, Centre Technique d’Appui et de Formation des Centres d’examens de Santé (CETAF)); Thomas Celarier (University hospital, University Jacques Lisfranc; Hopital de la Charité); Regis Gonthier (University hospital, University Jacques Lisfranc); Malek Masmoudi (University of Lyon, University Jean Monnet); and Xiaolan Xie (Mines Saint-Etienne)
Abstract
The bed occupancy ratio reflects the state of a hospital at a given time, and it is important for management to track this figure to proactively avoid overcrowding and maintain a high quality of care. The objective of this work is to propose a decision-aid tool for hospital managers to decide on the bed requirements for a given hospital or network of hospitals over a short-to-medium-term horizon. To that end, we propose a new data-driven discrete-event simulation model, based on data from a French university hospital, to predict bed and staff requirements. We present a case study illustrating the tool's ability to monitor bed occupancy in the recovery unit given the admission rate of ED patients during the SARS-CoV-2 pandemic. These results give an interesting insight into the situation, providing decision makers with a powerful tool to establish an informed response.
Healthcare Applications
Decision Tools in Healthcare Settings Using Simulation Modeling
Chair: Kurtis Konrad (North Carolina State University)
Real-Time Nurse Dispatching Using Dynamic Priority Decision Framework
Canan Gunes Corlu, John Maleyeff, Jiaxun Wang, and Kaming Yip (Boston University, Metropolitan College) and John Farris (Grand Valley State University, School of Engineering)
Abstract
The increase in medical treatment complexity can cause even experienced nurses to have difficulty determining priorities among patient needs. Electronic health record systems will enable automated decision support to assist medical professionals in making these determinations. This article details a framework that uses a discrete-event simulation, programmed in Python, to determine how priorities should be assigned in real time based on the characteristics of patient needs. The severity of a patient need is dynamic because it increases over time until the need is addressed. The simulation framework is applied to a cardiac care unit with 14 patients who collectively have 125 needs. Four different priority schemes are evaluated and their effectiveness compared under the assumption of an 8- or 9-nurse capacity. The results illustrate the importance of modeling the dispatching of nurses according to severity because, although fewer nurses result in longer average queue times, they can handle higher-severity needs effectively.
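The core idea — severity that grows while a need waits, with the freed nurse always taking the currently most severe need — can be sketched compactly. The need tuples and linear growth rule below are hypothetical illustrations, not the article's calibrated cardiac-care model.

```python
import heapq

def dispatch(needs, n_nurses, service_time=1.0):
    """Greedy dynamic-priority dispatch: whenever a nurse frees up, serve the
    waiting need with the highest *current* severity, where severity grows
    linearly with waiting time (base + rate * wait)."""
    nurses = [0.0] * n_nurses            # next-free times of each nurse
    heapq.heapify(nurses)
    waiting = list(needs)                # (arrival, base_severity, growth_rate)
    order = []
    while waiting:
        now = heapq.heappop(nurses)      # earliest available nurse
        ready = [w for w in waiting if w[0] <= now] or [min(waiting)]
        now = max(now, min(w[0] for w in ready))
        # Current severity = base + rate * time waited so far.
        chosen = max(ready, key=lambda w: w[1] + w[2] * (now - w[0]))
        waiting.remove(chosen)
        order.append(chosen)
        heapq.heappush(nurses, now + service_time)
    return order

# Three simultaneous needs: the fast-growing low-base need overtakes the
# static medium-base need by the time the single nurse frees up.
needs = [(0.0, 5.0, 0.0), (0.0, 3.0, 0.0), (0.0, 1.0, 4.0)]
served = dispatch(needs, n_nurses=1)
print([n[1] for n in served])  # → [5.0, 1.0, 3.0]
```

A static-priority scheme would serve the needs in base-severity order (5, 3, 1); re-evaluating severity at each dispatch changes the order, which is the effect the article's priority schemes are designed to compare.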
A Simulation Model for the Multi-Period Kidney Exchange Incentivization Problem
Kurtis Konrad (North Carolina State University)
Abstract
Kidney exchanges provide an opportunity for individuals who need a new kidney to effectively trade a donor’s incompatible kidney for a compatible one. We present a mechanism for fully dynamic kidney exchanges that incentivizes transplant centers to truthfully participate in global matchings through a credit-based weighting scheme. Our mechanism incorporates both cycles and altruistically initiated chains while allowing patients to remain in the system for multiple time periods. Using simulation, we demonstrate that this credit-based matching mechanism is strategy proof, individually rational, and efficient for all transplant centers under the assumption that all offered matches are accepted.
Simulation Modeling as a Decision Tool for Capacity Allocation in Breast Surgery
Derya Kilinc and Narges Shahraki (Mayo Clinic); Esma Gel (Arizona State University); and Amy Degnim, Tanya Hoskin, Tiffany Horton, Mustafa Sir, and Kalyan Pasupathy (Mayo Clinic)
Abstract
Increased surgeon workload can result in prolonged access times for patients and may lead to surgeon burnout. Managing access times through investments in care capacity and the hiring of providers requires an understanding of the patient access times resulting from a given level of care capacity under different patient demand scenarios. We explore the effectiveness of a simulation-based framework in providing workforce planning insights. Our framework involves modeling patient demand by considering different groups of surgical procedures, a simulation model that allows calibration of certain parameters using data, and consideration of different demand and capacity scenarios to provide an understanding of the range of patient access times that can be expected over the planning horizon. Our results show that such a simulation-based framework can help ground workforce planning and capacity investment decisions in operational data and help healthcare institutions manage such costs.
Healthcare Applications
Discrete-Event Simulation Models to Inform Healthcare Decisions
Chair: Xiaoquan Gao (Purdue University)
Primary Healthcare Delivery Network Simulation Using Stochastic Metamodels
Najiya Fatma and Shoaib Mohd (Indian Institute of Technology Delhi), Navonil Mustafee (University of Exeter), and Varun Ramamohan (Indian Institute of Technology Delhi)
Abstract
A discrete-event simulation (DES) of the network of primary health centers (PHCs) in a region can be used to evaluate the effect of changes in patient flow on operational outcomes across the network, and can also form the base simulation to which simulations of secondary and tertiary care facilities can be added. We present a DES of a network of PHCs using stochastic metamodels developed from more detailed DES models of PHCs (‘parent’ simulations), which were developed separately, for comprehensively analyzing individual PHC operations. The stochastic metamodels are DESs in their own right. They are simplified versions of the parent simulation with full-featured representations of only those components relevant to the analysis at hand. We show that the outputs of interest from the metamodels and the parent simulations (including the network simulations) are statistically similar and that our metamodel-based network simulation yields reductions of up to 80% in runtimes.
pdf
Dynamic Optimization of Drone Dispatch for Substance Overdose Rescue
Xiaoquan Gao and Nan Kong (Purdue University) and Paul M. Griffin (Pennsylvania State University)
Abstract
Opioid overdose rescue is very time-sensitive. Hence, drone-delivered naloxone has the potential to be a transformative innovation due to its easily deployable and flexible nature. We formulate a Markov Decision Process (MDP) model to dispatch the appropriate drone after an overdose request arrives and to relocate the drone to its next waiting location after having completed its current task. Since the underlying optimization problem is subject to the curse of dimensionality, we solve it using ad-hoc state aggregation and evaluate it through a simulation with higher granularity. Our simulation-based comparative study is based on emergency medical service data from the state of Indiana. We compare the optimal policy resulting from the scaled-down MDP model with a myopic policy as the baseline. We consider the impact of drone type and service area type on outcomes, which offers insights into the performance of the MDP suboptimal policy under various settings.
pdf
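The dispatch-and-relocate formulation above can be made concrete with a minimal value iteration sketch. Everything below, the states, actions, transition probabilities, and rewards, is invented for illustration and is not taken from the paper or its data:

```python
# Minimal value iteration for a tiny dispatch-style MDP. All states,
# transition probabilities, and rewards are invented for illustration.
states = ["base", "enroute", "idle_remote"]

# transitions[s][a] = list of (probability, next_state, reward).
transitions = {
    "base": {
        "dispatch": [(1.0, "enroute", 10.0)],  # reward for serving a request
        "wait": [(1.0, "base", 0.0)],
    },
    "enroute": {
        "finish": [(0.7, "idle_remote", 0.0), (0.3, "base", 0.0)],
    },
    "idle_remote": {
        "relocate": [(1.0, "base", -1.0)],  # small cost to fly back
        "wait": [(1.0, "idle_remote", 0.0)],
    },
}

gamma = 0.9  # discount factor
V = {s: 0.0 for s in states}
for _ in range(200):  # enough sweeps for convergence at gamma = 0.9
    V = {
        s: max(
            sum(p * (r + gamma * V[nxt]) for p, nxt, r in outcomes)
            for outcomes in transitions[s].values()
        )
        for s in states
    }

print({s: round(v, 2) for s, v in V.items()})
```

The paper's actual model is far larger, which is why the authors resort to state aggregation and simulation-based evaluation rather than exact value iteration.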
Using Simulation to Evaluate Operational Trade-offs Associated with the Use of Care Teams in Specialty Care Settings
Krupali Patel (University of Minnesota), Vahab Vahdat (Harvard Medical School), and Bjorn Berg (University of Minnesota)
Abstract
New approaches to designing the delivery of care in specialty outpatient clinics are emerging as a result of the use of care teams and shared resources. However, questions remain surrounding how shared resources, e.g., exam rooms or support staff, should be allocated to balance competing performance objectives in specialty care settings. We develop a discrete-event simulation model to evaluate resource allocation policies in an outpatient specialty clinic. Different resource allocation policies, ranging from fully dedicated to partially flexible to fully flexible for multiple resource types, are evaluated based on multiple performance measures. We find that a proposed policy based on strategic capping of flexibility achieves desired access to resources from a provider’s perspective while also maintaining high utilization rates.
pdf
Healthcare Applications
Operational Planning for Critical Patients
Chair: Jens Brunner (University of Augsburg, Faculty of Business and Economics)
Emergency Imaging after a Mass Casualty Incident: An Operational Perspective via a Simulation Study
Best Contributed Applied Paper - Finalist
Joseph Sismondo (University of Toronto), Vahid Sarhangian (University of Toronto), Emmett Borg (Hoffmann-La Roche), Eric Roberge (Madigan Army Medical Center), and Ferco H. Berger (Sunnybrook Health Sciences Centre)
Abstract
In the aftermath of a Mass Casualty Incident (MCI), many patients require lifesaving treatments and surgeries. Due to the sudden surge in demand, the resources of the hospital are overwhelmed, making proper planning and use of the available resources crucial to minimizing mortality and morbidity. To help with planning, patients are triaged into four levels based on the clinical assessment of the criticality of their conditions. The triage decisions are, however, subject to error, and a patient may be under- or overtriaged. Mistriages can be identified by performing imaging, e.g., a Computed Tomography (CT) scan, but imaging also takes non-negligible time and has limited capacity. We propose a queueing network model of patient flow during an MCI and use simulation experiments to quantify the value of identifying mistriaged patients. Our results demonstrate the value of performing imaging, but also point to the importance of accounting for its limited capacity.
pdf
Simulation and Evaluation of ICU Management Policies
Jie Bai (University of Augsburg, Faculty of Business and Economics); Steven Gerstmeyr (University of Augsburg); and Jens O. Brunner (University of Augsburg, Faculty of Business and Economics)
Abstract
The intensive care unit is one of the bottleneck resources in the hospital, because demand grows much faster than capacity. The pressure on intensive care unit managers to use resources efficiently and effectively is increasing, and optimal management policies are therefore required. In this work, we evaluate eleven policies commonly referenced in the literature and compare their performance on nine key performance indicators covering different perspectives, such as utilization, patient health status, and hospital profit. The thirty most frequently occurring patient paths, based on a practical dataset of more than 75k patient records from a large German teaching hospital, are simulated. According to our results, increasing capacity and treating patients in well-equipped intermediate care units are better from a medical perspective, while the early-discharge policy performs well when capacity is limited. Furthermore, a scenario of handling COVID-19 is studied.
pdf
Incorporating Patient Deterioration When Simulating Utilization of a Cardiovascular Intensive Care Unit
Ziqi Wang, Ambika Agrawal, Imani Carson, Luke Liu, Harini Pennathur, Hadi Saab, Amy Mainville Cohn, Amanda Moreno-Hernandez, and Hitinder Gurm (University of Michigan)
Abstract
Patients undergoing many forms of cardiovascular surgery typically enter the cardiac intensive care unit (CICU) after surgery, transfer to a step-down (SDn) unit, and then are ultimately either discharged or bounce back to the CICU because of deterioration. Randomness and unpredictability exist in these processes, especially the bounce back process. Underestimating bounce back rates will result in a lack of bed capacity and in patient deferrals. Adding beds, however, will increase cost. Modeling bounce back therefore requires careful consideration of the trade-off between decreasing patient denials and potential system costs. We present a discrete-event simulation model to assess how bounce back will impact the assessment of bed capacity in the CICU and SDn and other major metrics of the system. We present analyses utilizing data from our collaborators at the Samuel and Jean Frankel Cardiovascular Center at Michigan Medicine.
pdf
Healthcare Applications
Discrete-Event Simulation Modeling to Address Operational Questions in Healthcare
Chair: Mandana Rezaeiahari (University of Arkansas for Medical Sciences)
Discrete-Event Simulation with Consideration for Patient Preference When Scheduling Specialty Telehealth Appointments
Adam VanDeusen, Nicholas Zacharek, Emmett Springer, Advaidh Venkat, Amy Cohn, Megan Adams, Jacob Kurlander, and Sameer Saini (University of Michigan)
Abstract
Healthcare providers have begun providing care to patients via remote appointments using web-based, synchronous video visits. As this appointment modality becomes increasingly prevalent, decision-makers must consider how to incorporate patient preference for an in-person versus virtual care modality when scheduling future visits. We present a discrete-event simulation that models several potential policies that these decision-makers could use to schedule patients, and demonstrate this simulation in the clinical context of patients with gastroesophageal reflux disease. This simulation provides key metrics for decision-makers, including provider utilization, patient lead time, and proportion of appointments that satisfy patients’ preferences for appointment modality.
pdf
Evaluating Staffing Levels in Milk Lab Using Discrete Event Simulation
Mandana Rezaeiahari and Antonije Lazic (UAMS)
Abstract
This study uses discrete event simulation (DES) to evaluate various staffing levels at a milk lab in an academic medical center. The studied milk lab operates 10 hours a day, 7 days a week, and processes on average 45 orders from a 64-bed Neonatal Intensive Care Unit (NICU). We categorized the orders into simple and complex orders. The ranges for the percentages of simple and complex orders were obtained by reviewing the historical order data at the milk lab. A DES model was then built to evaluate the number of required technicians at the milk lab by varying the number of orders as well as the percentage of complex orders. The performance measures studied in our simulation model were the makespan and the utilization of the milk lab technicians. Based on the results, three technicians is a reasonable staffing level, conditional on timely start of the orders.
pdf
A Simulation Study of Outpatient Surgery Clinic with Stochastic Patient Re-Entrance
Haolin Feng (Sun Yat-sen University), Michelle McGaha Alvarado (University of Florida), Zitian Li (Sun Yat-sen University), and Coralys M. Colon-Morales (University of Florida)
Abstract
This study investigates how the variability of different stochastic elements affects the performance of operations at a Mohs Micrographic Surgery (MMS) clinic. MMS is a popular procedure for treating non-melanoma skin cancers. In MMS, the surgeon performs skin layer excisions on the patient one at a time, and each removed layer is then examined. If cancerous cells remain, another excision is conducted; otherwise the patient goes through wound repair before being discharged. Such repetitive excision of thin layers leads to low recurrence rates and impressive post-surgery cosmetic results, but it requires an uncertain number of same-day surgeries, which may lead to long patient waiting times and clinic overtime. We develop a simulation model to study the operational performance of an MMS clinic under a given appointment schedule used in practice. Our study reveals how waiting time and clinic overtime are affected by different stochastic factors.
pdf
Healthcare Applications
Simulation Models for Evaluating Patient Flow in Different Care Settings
Chair: Breanna P. Swan (North Carolina State University)
A Simulation Model to Evaluate the Patient Flow in an Intensive Care Unit Under Different Levels of Specialization
Andres Alban and Stephen E. Chick (INSEAD) and Oleksandra Lvova and Danielle Sent (Amsterdam UMC, University of Amsterdam; Amsterdam Public Health Research Institute)
Abstract
Intensive care units are important departments in hospitals that are complicated in that they have both urgent and elective patients with a variety of specialty needs. Their design involves important operations strategy decisions, such as whether there are several specialized units for certain patient needs or general facilities, or something in between. They also involve bed capacity decisions for the aggregate and potentially specialized units. This paper presents a simulation model which is used to assess trade-offs in these operational design issues with respect to three performance measures (rejection rate, rescheduling rate, and bed occupancy rate), using data and design options for Amsterdam UMC, location AMC (AMC).
pdf
Integrated Simulation Tool to Analyze Patient Access to and Flow During Colonoscopy Appointments
Jake Martin and Pushpendra Singh (University of Michigan), Jakob Kiel-Locey (Michigan Medicine), Karmel Shehadeh (Carnegie Mellon University), Amy Cohn (University of Michigan), and Sameer Saini and Jacob Kurlander (Michigan Medicine)
Abstract
Colonoscopy procedures are key to reducing colorectal cancer incidence and improving outcomes. For this reason, it is important that clinics be designed to maximize access to care and to use clinic time effectively. This paper presents a simulation tool that analyzes how different scheduling policies impact overall clinic operations. By simultaneously simulating both scheduling and operations, the tool can account for more variability and better predict actual outcomes. The tool can inform clinics about which scheduling policies work best for them and help analyze the trade-offs between different policies.
pdf
Evaluating Diabetic Retinopathy Screening Interventions in a Microsimulation Model
Breanna Swan, Siddhartha Nambiar, Priscille Koutouan, Maria E. Mayorga, and Julie Ivy (North Carolina State University) and Stephen Fransen (University of Oklahoma, Retinal Care Inc)
Abstract
Diabetic retinopathy (DR) is the leading cause of blindness among working-age Americans. Early detection, timely treatment, and appropriate follow-up care reduce the risk of severe vision loss from DR by 95%, yet less than 50% of people with diabetes adhere to the recommended screening guidelines. Diabetes is a complicated disease for patients and their physicians to manage. We developed a microsimulation integrating a natural history model of DR with a patient’s interaction with the care system. We introduced a DR screening device in primary care, with and without care coordination by a medical professional, as two interventions to the current care path. We found the interventions increased adherence of patients with vision-threatening DR (VTDR) to follow-up eye care, decreased the number of ‘unnecessary’ visits to specialty eye care by patients without VTDR, and decreased the total years spent blind.
pdf
Healthcare Applications
Combining Methodologies with Discrete-Event Simulation for Healthcare Applications
Chair: Parastu Kasaie (Johns Hopkins University)
A Combined Simulation and Machine Learning Approach for Real-time Delay Prediction for Waitlisted Neurosurgery Candidates
Vaibhav Baldwa and Siddharth Sehgal (Indian Institute of Technology Delhi), Vivek Tandon (All India Institute of Medical Sciences New Delhi), and Varun Ramamohan (Indian Institute of Technology Delhi)
Abstract
In this study, we present a method to predict whether a patient seeking admission to the neurosurgery ward of a large public tertiary care hospital in north India receives admission within a prespecified duration. The prediction must be made at the time the patient presents at the ward seeking admission, so that they can then decide whether to wait for admission into the neurosurgery ward or seek care elsewhere. We accomplish this by simulating the admission and patient stay processes at the neurosurgery ward, and we use the simulation to generate data to train machine learning algorithms to predict whether the patient is admitted as a function of the state of the simulation at the time the patient presents seeking admission. With ensemble tree classifiers, we achieve generalization area-under-the-curve scores of 95% for all patients taken together and between 80% and 95% depending upon patient subtype.
pdf
A Generic Framework to Analyze and Improve Patient Pathways Within a Healthcare Network Using Process Mining and Discrete-event Simulation
Thomas Franck (Groupe Hospitalier Bretagne Sud); Vincent Augusto (Mines Saint-Étienne); and Paolo Bercelli and Saber Aloui (Groupe Hospitalier Bretagne Sud)
Abstract
Congestion in the Emergency Department (ED) is one of the most important issues in healthcare systems. A lack of downstream beds can deteriorate the quality of care for patients who need hospitalization after the ED. We propose a generic simulation model to analyze patient pathways from the ED to hospital discharge. The model is adaptable to all pathologies and can include several hospitals within a healthcare network. A pathway analysis using Process Mining is performed with medical staff in order to identify relevant pathways. We then propose several designs of experiments to test medical unit capacity variations, taking into account real data and practitioners’ expertise. A practical case study on the stroke patient pathway in the Southern Brittany Hospital illustrates the approach. Results show that the best way to improve the number of optimal pathways is to increase the capacity of Rehabilitative Care units.
pdf
Assessing the Impact of Targeted Screening and Treatment of Diabetes and Hypertension among Adults Living with HIV in Nairobi, Kenya
Melissa Schnure, Parastu Kasaie, David Dowdy, Brian Weir, Chen Dun, and Chris Beyrer (Johns Hopkins University)
Abstract
Individuals living with human immunodeficiency virus (HIV) today are living longer, thanks to expanded access to antiretroviral therapy (ART); however, this population is therefore increasingly at risk for many age-associated comorbidities. The future health of people living with HIV will therefore depend on the prevention and management of non-communicable diseases (NCDs), with consideration for integrated approaches to screening and treatment becoming increasingly important. This analysis applies a hybrid simulation of HIV and NCDs to examine the impact of providing screening and treatment for hypertension and diabetes at HIV facilities in Nairobi, Kenya. We combine a compartmental model of the HIV epidemic at a population level with a microsimulation of cardiovascular disease (CVD), and explore the impact of various strategies for targeting eligible individuals on ART, by age and gender, to receive NCD screening and treatment.
pdf
Track Coordinator - Hybrid Simulation: David Bell (Brunel University London), Antuela Tako (Loughborough University)
Hybrid Simulation
Hybrid Simulation in Practice
Chair: Antuela Tako (Loughborough University)
The Benefits of a Hybrid Simulation Hub to Deal with Pandemics
Jennifer I. Lather (University of Nebraska - Lincoln) and Tillal Eldabi (University of Surrey)
Abstract
The advent of COVID-19 has shaken the whole world to its core. With decision makers at all levels trying to tackle the spread of the disease and its economic ramifications, modeling and simulation are of paramount importance to this effort. Given the intricacy and interconnectedness of the problem, hybrid simulation (HS) seems to provide better support for modelers, given its ability to connect multiple decision categories. However, HS models are known to take longer to build while requiring multiple kinds of expertise, which does not match the rapid impacts of COVID-19. To enable faster development of such models, we call in this paper for the establishment of a hub for rapid HS model development through global collaboration of simulation modelers. In the absence of such hubs, we demonstrate how an HS model could be built from publicly available single-method models.
pdf
A Review of Hybrid Simulation in Healthcare
Vivianne Horsti dos Santos, Kathy Kotiadis, and Maria Paola Scaparra (University of Kent)
Abstract
Hybrid Simulation (HS) has been applied to healthcare systems, but the literature is still limited and there is an opportunity to develop this research. This review explores applications of HS in healthcare to outline research gaps and foster new research in HS to solve complex real healthcare problems. The twelve application papers found through a systematic literature search covered nearly all hybrid combinations. Discrete-Event Simulation (DES) and System Dynamics (SD) were found to be the most popular combination, and AnyLogic the most used HS tool. We found that none of the papers we reviewed used the SD and agent-based simulation (ABS) combination, which raises questions about the need for, and challenges associated with, certain combinations. HS healthcare applications are, for the most part, published in conference proceedings. We discuss opportunities for research and, in particular, the potential for HS application to problems related to communicable disease and healthcare services planning.
pdf
A Human Experiment using a Hybrid Agent-Based Model
Andrew J. Collins, Sheida Etemadidavan, and Pilar Pazos-Lago (Old Dominion University)
Abstract
Agent-based modeling (ABM) provides a means to investigate the emergent phenomena generated by interacting autonomous agents. However, there are some concerns with this modeling approach. One concern is how to integrate strategic group formation into ABM without imposing macro-level aggregation rules. Another concern is whether the computerized agents’ behavior reflects actual human behavior. Collins and Frydenlund developed a hybrid modeling approach in 2018 to address the first concern. The focus of this paper is on the second concern, in the context of that hybrid model. An experiment was conducted to help determine whether a person’s experiences affect their behavior and whether their behavior is similar to that generated by the hybrid model. The experimental results confirmed these two assertions. Our experiment used a standard cooperative game-theoretic game, called the glove game, as its base scenario.
pdf
Hybrid Simulation
Hybrid Simulations of Dynamic Systems
Chair: Antuela Tako (Loughborough University)
Virtual Hardware in the Loop: Hybrid Simulation of Dynamic Systems with a Virtualization Platform
Jan Reitz, Alexander Gugenheimer, and Jürgen Roßmann (RWTH Aachen University)
Abstract
This paper demonstrates the feasibility of co-simulation with virtual hardware as an alternative to Hardware in the Loop (HiL) simulation. Without real-time constraints, Virtual Hardware in the Loop (VHiL) simulation can be performed on general-purpose hardware and software. Specific challenges of this kind of simulation, such as data exchange and synchronization, are addressed. An exemplary implementation coupling the virtualization platform QEMU and the simulation framework VEROSIM is presented and used in a case study simulating a self-balancing robot controlled by a virtual ATmega328p. The virtual hardware's properties are systematically varied. Results show that the effects of peripheral resolution and the processor's clock speed on the overall system behavior can be observed in a VHiL simulation. It is concluded that VHiL can be a viable alternative to HiL simulation, but the applicability of this approach depends on the availability and accuracy of emulators for specific target platforms.
pdf
Rollback Support in HyFlow Modular Models
Fernando Barros (University of Coimbra)
Abstract
In this paper we develop an extension of the HyFlow (Hybrid Flow System Specification) formalism that provides rollback support to hybrid modular models. The extension preserves formalism modularity, enabling the co-simulation of hierarchical models. Rollback plays an important role in some numerical methods; examples include predictor-corrector integrators (PCIs) and zero detectors (ZDs). These methods require simulation time to move both forward and backward in order to compute better estimates of the numerical solutions. We present the formalism's semantics and a description of the PCI and ZD methods.
pdf
Unified DEVS-based platform for Modeling and Simulation of Hybrid Control Systems
Ezequiel Pecker Marcosig (Departamento de Electrónica, FI-UBA / Instituto de Ciencias de la Computación (ICC-CONICET)); Sebastian Zudaire (División de Física Estadística e Interdisciplinaria, Instituto Balseiro, UNCuyo); Martin Garrett (División de Vibración CNEA, Centro Atómico Bariloche); Sebastian Uchitel (Departamento de Computación, FCEyN-UBA / Instituto de Ciencias de la Computación (ICC-CONICET); Computing Department, Imperial College London); and Rodrigo Castro (Departamento de Computación, FCEyN-UBA / Instituto de Ciencias de la Computación (ICC-CONICET))
Abstract
Recent robotic research has led to different architectural approaches that support enactment of automatically synthesized discrete event controllers from user specifications over low-level continuous variable controllers. Simulation of these hybrid control approaches to robotics can be a useful validation tool for robot users and architecture designers, but presents the key challenge of working with discrete and continuous representations of the robot, its environment and its mission plans. In this work we address this challenge showcasing a unified DEVS-based hybrid simulation platform. We model and simulate the hybrid robotic software architecture of a fixed-wing UAV, including the full stack of controllers involved: discrete, hybrid and continuous. We validate the approach experimentally on a typical UAV mapping mission and show that with our unified approach we are able to achieve simulation speed-ups up to one order of magnitude above our previous Software In The Loop simulation setup.
pdf
Hybrid Simulation
Hybrid Simulation and Reinforcement Learning
Chair: Antuela Tako (Loughborough University)
Deep Reinforcement Learning in Linear Discrete Action Spaces
Wouter van Heeswijk (Centrum Wiskunde & Informatica, University of Twente) and Han La Poutré (Centrum Wiskunde & Informatica, Technical University of Delft)
Abstract
Problems in operations research are typically combinatorial and high-dimensional. To a degree, linear programs may efficiently solve such large decision problems. For stochastic multi-period problems, decomposition into a sequence of one-stage decisions with approximated downstream effects is often necessary, e.g., by deploying reinforcement learning to obtain value function approximations (VFAs). When embedding such VFAs into one-stage linear programs, VFA design is restricted by linearity. This paper presents an integrated simulation approach for such complex optimization problems, developing a deep reinforcement learning algorithm that combines linear programming and neural network VFAs. Our proposed method embeds neural network VFAs into one-stage linear decision problems, combining the nonlinear expressive power of neural networks with the efficiency of solving linear programs. As a proof of concept, we perform numerical experiments on a transportation problem. The neural network VFAs consistently outperform polynomial VFAs as well as other benchmarks, with limited design and tuning effort.
pdf
A Method For Predicting High-Resolution Time Series Using Sequence-to-Sequence Models
Benjamin Woerrlein and Steffen Strassburger (Ilmenau University of Technology)
Abstract
With the increasing availability of data, the desire arises to interpret that data and use it for behavioral predictions. Traditionally, simulation has used data about the real system for input data analysis or within data-driven model generation. Automatically extracting behavioral descriptions from the data and representing them in a simulation model is a challenge for these approaches. Machine learning, on the other hand, has proven successful at extracting knowledge from large data sets and transforming it into more useful representations. Combining simulation approaches with methods from machine learning therefore seems promising for combining the strengths of both approaches. By representing some aspects of a real system with a traditional simulation model and others with a model incorporating machine learning, a hybrid system model (HSM) is generated. This paper suggests a specific HSM incorporating a deep learning method for predicting the anticipated power usage of machining jobs.
pdf
Track Coordinator - Introductory Tutorials: Anastasia Anagnostou (Brunel University London), Giulia Pedrielli (Arizona State University)
Introductory Tutorials
The Basics of Simulation
Chair: Nura Tijjani Abubakar (Brunel University London; Jigawa State Institute of IT, Kazaure)
K. Preston White (University of Virginia) and Ricki G. Ingalls (Retirement Clearinghouse LLC)
Abstract
Simulation is experimentation with a model. The behavior of the model imitates some salient aspect of the behavior of the system under study and the user experiments with the model to infer this behavior. This general framework has proven a powerful adjunct to learning, problem solving, design, and control. In this tutorial, we focus principally on discrete-event simulation – its underlying concepts, structure, and application.
pdf
Introductory Tutorials
Tutorial: Metamodeling for Simulation
Chair: Giulia Pedrielli (Arizona State University)
Russell R. Barton (Pennsylvania State University)
Abstract
Metamodels are simpler computer models that are designed to mimic the input-output behavior of discrete-event or other complex simulation models. They are models of models, thus the name (provided by Jack P. C. Kleijnen). This introductory tutorial will highlight uses of metamodels, commonly used metamodel types, the linkage between metamodel type and the set of simulation model runs used to fit the metamodel, and basic issues in building and validating metamodels. The tutorial ends with a brief summary of recent research in metamodel types and use, and the implications for future metamodeling applications.
pdf
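To make the tutorial's central idea concrete, here is a hedged sketch (not code from the tutorial; the response function and all numbers are invented): a toy stochastic simulation is run with replications at a small set of design points, and a quadratic polynomial metamodel is fitted by least squares to the averaged outputs.

```python
import random
import numpy as np

def simulation(x):
    # Toy stochastic "simulation model": a noisy quadratic response surface.
    return 2.0 + 3.0 * x - 0.5 * x**2 + random.gauss(0, 0.1)

random.seed(42)
# Design points: inputs at which the (expensive) simulation model is run.
design = np.linspace(0.0, 4.0, 9)
# Replicated runs at each design point average out the simulation noise.
responses = np.array([np.mean([simulation(x) for _ in range(20)]) for x in design])

# Quadratic polynomial metamodel fitted by least squares: a model of the model.
coeffs = np.polyfit(design, responses, deg=2)
metamodel = np.poly1d(coeffs)

# The metamodel now predicts the response cheaply at unsampled inputs.
print(round(float(metamodel(1.5)), 2))
```

The linkage the tutorial emphasizes is visible even here: the choice of design points and replications determines how well the fitted metamodel generalizes between them.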
Introductory Tutorials
Statistical Analysis of Simulation Output Data: The Practical State of the Art
Chair: John Chavis (Cornell University)
Averill Law (Averill M. Law & Associates, Inc.)
Abstract
One of the most important but neglected aspects of a simulation study is the proper design and analysis of simulation experiments. In this tutorial we give a state-of-the-art presentation of what the practitioner really needs to know to be successful. We will discuss how to choose the simulation run length, the warmup-period duration (if any), and the required number of model replications (each using different random numbers). The talk concludes with a discussion of three critical pitfalls in simulation output-data analysis.
pdf
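A minimal illustration of the replication-based analysis the tutorial advocates (a sketch with an invented toy model, not code from the talk): run n independent replications, each with its own random numbers, and form a t-based confidence interval from the replication means.

```python
import random
import statistics

def replicate(seed):
    # One independent replication with its own random-number stream;
    # a toy average of exponential samples stands in for a real model output.
    rng = random.Random(seed)
    samples = [rng.expovariate(1.0) for _ in range(500)]
    return statistics.mean(samples)

# Run n independent replications and build a t confidence interval
# for the mean response from the replication means.
n = 10
means = [replicate(seed) for seed in range(n)]
xbar = statistics.mean(means)
s = statistics.stdev(means)
t = 2.262  # t quantile for a 95% CI with n - 1 = 9 degrees of freedom
half_width = t * s / n**0.5
print(f"95% CI: {xbar:.3f} +/- {half_width:.3f}")
```

If the half width is too wide for the decision at hand, the standard remedy discussed in the tutorial is to increase the number of replications.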
Introductory Tutorials
Work Smarter, Not Harder: A Tutorial on Designing and Conducting Simulation Experiments
Chair: Saurabh Jain (The University of Arizona)
Susan M. Sanchez and Paul J. Sanchez (Naval Postgraduate School) and Hong Wan (North Carolina State University)
Abstract
Simulation models are integral to modern scientific research, national defense, industry and manufacturing, and public policy debates. These models tend to be extremely complex, often with thousands of factors and many sources of uncertainty. Understanding the impact of these factors and their interactions on model outcomes requires efficient, high-dimensional design of experiments. Unfortunately, all too often, large-scale simulation models continue to be explored in ad hoc ways. This suggests that more simulation researchers and practitioners need to be aware of the power of designed experiments in order to get the most from their simulation studies. In this tutorial, we demonstrate the basic concepts important for designing and conducting simulation experiments, and provide references to other resources for those wishing to learn more. This tutorial (an update of previous WSC tutorials) will prepare you to make your next simulation study a simulation experiment.
Introductory Tutorials
Tested Success Tips for Simulation Project Excellence
Chair: Thomas Voss (Leuphana Universität)
David T. Sturrock (Simio LLC)
Abstract
How can you make your projects successful? Modeling can certainly be fun, but it can also be quite challenging. With the new demands of Smart Factories, Digital Twins, and Digital Transformation, the challenges multiply. You want your first and every project to be successful, so you can justify continued work. Unfortunately, a simulation project is much more than simply building a model; the skills required for success go well beyond knowing a particular simulation tool. A 35-year veteran who has completed hundreds of successful projects shares important insights to enable project success, along with cautions and tips to help avoid common traps that lead to failure and frustration.
Introductory Tutorials
An Introduction to Modular Modeling and Simulation with PythonPDEVS and the Building-Block Library PythonPDEVS-BBL
Chair: Anastasia Anagnostou (Brunel University London)
Yentl Van Tendeloo, Randy Paredis, and Hans Vangheluwe (University of Antwerp)
Abstract
The Discrete Event System Specification (DEVS) is a popular formalism for modeling complex dynamic systems using a discrete-event abstraction. At this abstraction level, a timed sequence of pertinent "events" input to a system (or internal timeouts) causes instantaneous changes to the state of the system. The main advantages of DEVS are its rigorous formal definition and its support for modular composition.
This tutorial introduces the Classic DEVS formalism in a bottom-up fashion, using a simple traffic light example.
The syntax and operational semantics of Atomic models are introduced first; Coupled models are then introduced to structure models.
We continue with actual applications of DEVS, using an example in the performance analysis of queueing systems. As this example is built up from commonly used components such as generators and queues, we use a DEVS Building Block Library to model it.
We conclude with further reading on DEVS theory, DEVS variants, and DEVS tools.
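To give a feel for the Atomic DEVS structure the tutorial builds up, here is a hand-rolled, plain-Python sketch of an autonomous traffic light with a time-advance function, an output function, and an internal transition. This is illustrative only and does not use the actual PythonPDEVS API, whose class and method names differ.

```python
class TrafficLight:
    """Minimal Atomic-DEVS-style model: autonomous GREEN->YELLOW->RED cycle."""

    # how long the light stays in each phase (the time advance function ta)
    DURATIONS = {"GREEN": 57.0, "YELLOW": 3.0, "RED": 60.0}
    NEXT = {"GREEN": "YELLOW", "YELLOW": "RED", "RED": "GREEN"}

    def __init__(self):
        self.state = "RED"

    def time_advance(self):
        return self.DURATIONS[self.state]

    def output(self):
        # the output function (lambda), fired just before the internal transition
        return f"turning {self.NEXT[self.state]}"

    def internal_transition(self):
        # delta_int: change phase when the time advance expires
        self.state = self.NEXT[self.state]

# A tiny abstract simulator loop: advance simulated time event by event.
light, t, trace = TrafficLight(), 0.0, []
for _ in range(4):
    t += light.time_advance()
    trace.append((t, light.output()))
    light.internal_transition()
print(trace)
```

Classic DEVS additionally defines an external transition for reacting to inputs (e.g., a manual-mode interrupt), and Coupled models route events between such components; the tutorial covers both.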
Introductory Tutorials
Developing High-Quality Microsimulation Models Using R in Health Decision Sciences
Chair: Chukwudi Nwogu (Brunel University London)
Heesun Eom and Yan Li (Icahn School of Medicine at Mount Sinai)
Abstract
Health decision science is a growing field that studies the use of population health data and advanced analytical tools to inform decisions. This paper describes several modeling approaches and programming languages widely used in health decision sciences. Special emphasis is put on the development of microsimulation models using R. A recent microsimulation model—the Simulation for Health Improvement and Equity (SHINE) model—is described to demonstrate this development. Several practical recommendations for developing microsimulation models using R are proposed. This paper may serve as a practical guide for population health scientists and healthcare professionals who wish to develop their own microsimulation models to inform complex health decisions.
Introductory Tutorials
A Tutorial Introduction to Monte Carlo Tree Search
Chair: Szu Hui Ng (National University of Singapore)
Michael Fu (University of Maryland)
Abstract
This tutorial provides an introduction to Monte Carlo tree search (MCTS), which is a general approach to solving sequential decision-making problems under uncertainty using stochastic (Monte Carlo) simulation. MCTS is most famous for its role in Google DeepMind's AlphaZero,
the recent successor to AlphaGo, which defeated the (human) world Go champion Lee Sedol in 2016 and the world #1 Go player Ke Jie in 2017. Starting from scratch without using any domain-specific knowledge (other than the rules of the game), AlphaZero was able to defeat not only its predecessors in Go but also the best AI computer programs in chess (Stockfish) and shogi (Elmo), using just 24 hours of training based on MCTS and reinforcement learning. We demonstrate the basic mechanics of MCTS via decision trees and the game of tic-tac-toe.
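The simulation ("rollout") step at the heart of MCTS is easy to demonstrate on tic-tac-toe. The sketch below is a deliberately simplified flat Monte Carlo search: it scores each legal move by random playouts and omits the UCB-based selection, expansion, and backpropagation machinery that full MCTS adds on top.

```python
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def rollout(board, to_move):
    """Play uniformly random moves until the game ends; return the winner."""
    board = list(board)
    while winner(board) is None and legal_moves(board):
        board[random.choice(legal_moves(board))] = to_move
        to_move = "O" if to_move == "X" else "X"
    return winner(board)

def flat_monte_carlo(board, player, playouts=200):
    """Score every legal move by random playouts and return the best one."""
    opponent = "O" if player == "X" else "X"
    best_move, best_score = None, -1.0
    for move in legal_moves(board):
        nxt = list(board)
        nxt[move] = player
        score = 0.0
        for _ in range(playouts):
            w = rollout(nxt, opponent)
            score += 1.0 if w == player else (0.5 if w is None else 0.0)
        if score > best_score:
            best_move, best_score = move, score
    return best_move

random.seed(7)
# X to move can win immediately by completing the top row at square 2.
board = ["X", "X", ".",
         "O", "O", ".",
         ".", ".", "."]
print(flat_monte_carlo(board, "X"))  # prints 2
```

Full MCTS replaces the uniform evaluation of every move with a tree policy (typically UCB1) that concentrates playouts on promising branches, which is what makes it scale to games like Go.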
Logistics, Supply Chains, and Transportation
Safe and Efficient Public Movement on Foot or by Bus
Chair: Edward Williams (PMC)
Pedestrian Behavior at Intersections: A Literature Review of Models and Simulation Recommendations
Chenyu Tian (Tsinghua University, Tsinghua-Berkeley Shenzhen Institute) and Wai Kin Victor Chan and Yi Zhang (Tsinghua-Berkeley Shenzhen Institute)
Abstract
Understanding the behavior of pedestrians at intersections can help improve the efficiency and safety of urban traffic systems, and has increasingly drawn the attention of the transportation industry. Pedestrian behavior and movement are highly uncertain and difficult to analyze, not only because of individual characteristics but also because of interactions with vehicles and infrastructure. This study investigates various modeling studies on pedestrian behavior at intersections. Insights are provided regarding the inputs, algorithms, and application scenarios. This study also identifies limitations in existing traffic simulation tools involving pedestrians and provides recommendations for addressing these issues in future research. The modeling and simulation of the interaction between vehicles and pedestrians at intersections remain open challenges, and such models can serve as helpful tools to boost the development of Advanced Driver Assistance Systems (ADAS) as well as intelligent intersections.
Microsimulation of Bus Terminals: A Case Study From Stockholm
Therese Lindberg (The Swedish National Road and Transport Research Institute (VTI), Linköping University); Fredrik Johansson (The Swedish National Road and Transport Research Institute (VTI)); Anders Peterson (Linköping University); and Andreas Tapani (Swedish Transport Agency)
Abstract
When new bus terminals are being planned or existing ones redesigned, suitable tools are needed that can describe the complex situation at a terminal. Using microsimulation, vehicle movements and interactions can be simulated, and the congestion and capacity of a terminal can be evaluated. In this study, a discrete-event simulation model is used in a case study of the Slussen bus terminal in Stockholm, Sweden. The model is calibrated and validated with empirical data that are automatically collected at the terminal. Even with this limited amount of data, the time-per-boarding-passenger parameter can be calibrated with a relative error of less than 1%, and the validation gives further insight into the data needed to calibrate a terminal simulation model.
Logistics, Supply Chains, and Transportation
Last Mile Logistics
Chair: Jesus Gonzalez-Feliu (Institut Henri Fayol)
Towards a More Sustainable Future? Simulating the Environmental Impact of Online and Offline Grocery Supply Chains
Maik Trott, Christoph von Viebahn, and Marvin Auf der Landwehr (Hochschule Hannover)
Abstract
The negative effects of traffic, such as air quality problems and road congestion, put a strain on the infrastructure of cities and highly populated areas. A potential measure to reduce these negative effects is grocery home delivery (e-grocery), which can bundle driving activities and hence decrease traffic and the related emissions. Several studies have investigated the potential impact of e-grocery on traffic in various last-mile contexts. However, no holistic view of the sustainability of e-grocery across the entire supply chain has yet been proposed. Therefore, this paper presents an agent-based simulation to assess the impact of the e-grocery supply chain compared to the stationary one in terms of mileage and different emission outputs. The simulation shows that a high e-grocery utilization rate can aid in decreasing total driving distances by up to 255 % relative to the optimal value as well as CO2 emissions by up to 50 %.
A Simulation-Optimization Approach for Locating Automated Parcel Lockers in Urban Logistics Operations
Markus Rabe and Jorge Chicaiza-Vaca (Technological University Dublin); Rafael D. Tordecilla (Universitat Oberta de Catalunya, Universidad de La Sabana); and Angel A. Juan (Universitat Oberta de Catalunya, Euncet Business School)
Abstract
Experts propose using automated parcel lockers (APLs) to improve urban logistics operations. However, deciding where to locate these APLs is not a trivial task, especially when considering a multi-period horizon under uncertainty. Based on a case study developed in Dortmund, Germany, we propose a simulation-optimization approach that integrates a system dynamics simulation model with a multi-period capacitated facility location problem (CFLP). First, we built causal-loop and stock-flow diagrams to show the APL system's main components and interdependencies. Then, we formulated a multi-period CFLP model to provide the optimal number of APLs to be installed in each period. Finally, Monte Carlo simulation was used to estimate the cost and reliability level for different scenarios with random demands. In our experiments, only one solution reaches a 100% reliability level, with a total cost of 2.7 million euros. Nevertheless, if the budget is lower, our approach offers other good alternatives.
Agent-Based Simulation Improves E-Grocery Deliveries Using Horizontal Cooperation
Adrian Serrano-Hernandez (Public University of Navarre, Institute of Smart Cities); Rocio de la Torre (Public University of Navarre, INARBE Institute); Luis Cadarso (Rey Juan Carlos University, EIATA Institute); and Javier Faulin (Public University of Navarre, Institute of Smart Cities)
Abstract
E-commerce has grown tremendously in recent decades because of improvements in information and telecommunication technologies along with changes in societal lifestyles. More recently, e-grocery (groceries purchased online), including fresh vegetables and fruit, is gaining importance as the most efficient delivery system in terms of cost and time. In this respect, we evaluate the effect of cooperation-based policies on service quality among different supermarkets in Pamplona, Spain. Concerning the methodology, we first deploy a detailed survey in Pamplona to model e-grocery demand patterns. Secondly, we develop an agent-based simulation model for generating scenarios in cooperative and non-cooperative settings, considering the real data obtained from the survey analysis. Within the simulation framework, a Vehicle Routing Problem is dynamically generated and solved using a biased-randomization algorithm. Finally, the results show significant reductions in lead times and better customer satisfaction when horizontal cooperation is employed in e-grocery distribution.
Logistics, Supply Chains, and Transportation
Simheuristics for Vehicle Routing
Chair: Angel A. Juan (IN3-Open University of Catalonia (UOC))
A Simheuristic-Learnheuristic Algorithm for the Stochastic Team Orienteering Problem with Dynamic Rewards
Christopher Bayliss, Pedro J. Copado-Mendez, Javier Panadero, Angel A. Juan, and Leandro Do C. Martins (Universitat Oberta de Catalunya (UOC))
Abstract
In this paper we consider the stochastic team orienteering problem (STOP) with dynamic rewards and stochastic travel times. In the STOP, the goal is to generate routes for a fixed set of vehicles such that the sum of the rewards collected is maximised whilst ensuring that nodes are visited before a fixed time limit expires. The reward associated with each node depends upon the time at which it is visited, and these dynamic reward values have to be learned from simulation experiments during the search process. To solve this problem we propose a biased-randomised heuristic, which integrates a learning module and a simulation model within a learnheuristic algorithm (BRLH). Randomisation is important for generating a wide variety of solutions that capture the trade-off between reward and reliability. A series of computational experiments is carried out to analyse the performance of our BRLH approach.
A Simheuristic Algorithm for the Location Routing Problem with Facility Sizing Decisions and Stochastic Demands
Rafael D. Tordecilla (Universitat Oberta de Catalunya, Universidad de La Sabana); Javier Panadero (Universitat Oberta de Catalunya); Carlos L. Quintero-Araujo and Jairo R. Montoya-Torres (Universidad de La Sabana); and Angel A. Juan (Universitat Oberta de Catalunya)
Abstract
The location routing problem is a well-known problem in which decisions about facility location and vehicle routing must be made jointly. Traditionally, a fixed size or capacity is assigned to an open facility, and this size is a unique input parameter. However, real-world cases show that decision makers usually have a set of size alternatives. If this size is selected accurately according to the demand of the allocated customers, then location decisions and routing activities incur smaller costs. Nevertheless, choosing this size implies additional variables that make an already NP-hard problem even more challenging. In addition, considering stochastic demands contributes to making the optimization problem more difficult to solve. Hence, a simheuristic algorithm is proposed in this work. It combines the efficiency of metaheuristics with the capabilities of simulation to deal with uncertainty. A series of computational experiments shows that our approach can efficiently deal with medium-large instances.
A Simheuristic for the Stochastic Two-echelon Capacitated Vehicle Routing Problem
Angie Ramírez-Villamil and Jairo R. Montoya-Torres (Universidad de La Sabana) and Anicia Jaegler (Kedge Business School)
Abstract
Two-echelon distribution systems are very common in last-mile supply chains and urban logistics systems. The problem consists of delivering goods from one depot to a set of satellites, usually located outside urban areas, and from there to a set of geographically dispersed customers. This problem is modeled as a two-echelon vehicle routing problem (2E-VRP), which is known to be computationally difficult to solve. This paper proposes a solution approach based on optimization-simulation to solve the 2E-VRP with stochastic travel times. The objective function considered is the minimization of travel times. The efficiency of the solution approach is analyzed against the solution of the deterministic counterpart, which is solved using both exact and approximate approaches. The impact of adding stochastic travel speeds on the objective function is evaluated through simulation. Experiments are run using real data from convenience stores in the city of Bogota, Colombia.
Logistics, Supply Chains, and Transportation
Warehousing Optimization Techniques
Chair: Carles Serrat (Universitat Politècnica de Catalunya-BarcelonaTECH)
Gravity Clustering: a Correlated Storage Location Assignment Problem Approach
Mohammadnaser Ansari and Jeffrey S. Smith (Auburn University)
Abstract
Warehouses and warehouse-related operations have long been a field of interest for researchers. One area of focus is the Storage Location Assignment Problem (SLAP, or slotting), whose goal is to find the best locations in a warehouse in which to store products. With the current COVID-19 pandemic, there has been a shopping paradigm shift towards e-commerce, which is not expected to reverse after the pandemic. This paradigm shift raises the need for better-performing multi-pick warehouses. In this paper, we propose a clustering method based on the gravity model, and we show that our method improves performance in warehouses where there is more than one pick per trip.
Comparison of Deadlock Handling Strategies for Different Warehouse Layouts with an AGVS
Marcel Müller and Jan Hendrik Ulrich (Otto von Guericke University Magdeburg), Lorena Silvana Reyes-Rubiano (University of La Sabana), Tobias Reggelin (Otto von Guericke University Magdeburg), and Sebastian Lang (Fraunhofer Institute for Factory Operation and Automation IFF)
Abstract
Automated guided vehicles (AGVs) form a large and important part of logistic systems, improving productivity and reducing costs. When multiple AGVs operate in confined and uncertain environments, many issues can occur, such as collisions and deadlocks, which need to be addressed. This paper presents a flexible simulation model for a warehouse with various AGVs. We implemented the three typical strategies for handling deadlocks: prevention, avoidance, and detection and resolution. The results show that there is no dominant strategy and that the outcome strongly depends on the individual case and the input parameters.
Logistics, Supply Chains, and Transportation
Air Transport
Chair: Javier Faulin (Public University of Navarre, Institute of Smart Cities)
Integration of Physical Simulations in Static Stability Assessments for Pallet Loading in Air Cargo
Philipp Gabriel Mazur, No-San Lee, and Detlef Schoder (University of Cologne)
Abstract
In the air cargo context, pallet loading faces substantial constraints and item heterogeneity. The stability constraint in the pallet loading problem is highly important due to its impact on the efficiency, security, and resulting costs of an air cargo company. In information systems that support pallet loading, physical simulations provide a realistic approximation of a pallet's stability. However, current approaches neglect the opportunity to integrate physical simulations into the underlying solvers. In this research, we propose and compare two approaches that integrate a physical simulation as a fixed component of the problem-solving heuristic, and we include irregular shapes. Our results achieve runtimes that meet air cargo requirements; therefore, assumptions about the cargo, e.g., shape assumptions, can be relaxed.
An Agile Simheuristic for the Stochastic Team Task Assignment and Orienteering Problem: Applications to Unmanned Aerial Vehicles
Javier Panadero and Angel A. Juan (Universitat Oberta de Catalunya-IN3), Manel Grifoll (Universitat Politècnica de Catalunya-BarcelonaTECH), Mohammad Dehghanimohamamdabadi (Northeastern University), Alfons Freixes (Euncet Business School), and Carles Serrat (Universitat Politècnica de Catalunya-BarcelonaTECH)
Abstract
Efficient coordination of unmanned aerial vehicles (UAVs) requires the solving of challenging operational problems. One of them is the integrated team task assignment and orienteering problem (TAOP). The TAOP can be seen as an extension of the well-known team orienteering problem (TOP). In the classical TOP, a homogeneous fleet of UAVs has to select and visit a subset of customers in order to maximize, subject to a maximum travel time per route, the total reward obtained from these visits. In the TAOP, a number of different tasks (customer services) have to be assigned to a fleet of heterogeneous UAVs, while also determining the best routing plan for covering these services. Since factors such as weather conditions might influence travel times, these are modeled as random variables. Reliability issues are also considered, since random times might prevent a route from being successfully completed before a UAV runs out of battery.
A Simheuristic Approach for Robust Scheduling of Airport Turnaround Teams
Yagmur Simge Gök and Maurizio Tomasella (University of Edinburgh); Daniel Guimarans (Amazon); and Cemalettin Ozturk (Raytheon Technologies, United Technologies Research Center Ireland)
Abstract
The problem of developing robust daily schedules for the teams turning around aircraft at airports has recently been approached through an efficient combination of project scheduling and vehicle routing models, solved jointly by constraint programming and mixed integer programming solvers organized in a matheuristic approach based on large neighborhood search. Therein, robustness is achieved by optimally allocating time windows to tasks, as well as slack times to the routes to be followed by each team throughout their working shift. We enhance that approach by integrating discrete-event simulation within a simheuristic scheme, whereby results from simulation provide constructive feedback to improve the overall robustness of the plan. This is achieved as a trade-off between the interests of each separate turnaround service provider and those of the airport as a whole. Numerical experiments show the applicability of the developed approach as a decision support mechanism at any airport.
Logistics, Supply Chains, and Transportation
Health and Humanitarian Logistics
Chair: Canan Gunes Corlu (Boston University)
Scenario-based Simulation Approach for an Integrated Inventory Blood Supply Chain System
Mohammad Arani (University of Arkansas at Little Rock), Saeed Abdolmaleki (Shahid Beheshti University), and Xian Liu (University of Arkansas at Little Rock)
Abstract
This study compares a newly proposed integrated inventory blood supply chain (BSC) with the current practice of the blood product distribution system. The importance of a well-designed blood supply chain is indisputable when human lives and valuable blood products are at stake. Our discrete event simulation approach and scenario discussion encompass a set of operational decisions to manage the complexity of the system. Using the Arena simulation package, we model a carefully designed blood supply chain to provide a critical comparison of the two primary key performance indicators of the BSC: shortages and outdated units. We conclude that the proposed method would bring a straightforward improvement with respect to the indicators under study.
A Simulation Model for Short and Long Term Humanitarian Supply Chain Operations Management
Marilène Cherkesly and Yasmina Maïzi (École des sciences de la gestion, Université du Québec à Montréal)
Abstract
Traditionally, the design of supply chains for humanitarian operations has been developed distinctly for the different disaster management phases, with little attention to the relief-to-development continuum. For the immediate response phase, this design has an emphasis on speed, whereas for the reconstruction phase, it has an emphasis on cost reduction. In this paper, we develop a sustainable humanitarian supply chain network for the relief-to-development continuum. Hence, this network ensures an effective and smooth transition from response to reconstruction operations. We develop three network structures that integrate the lean and agile principles to different extents. To determine the best characteristics of such a sustainable supply chain, we use discrete event simulation modeling. We validate and compare each network structure through several scenarios fed by data sets available from the United Nations World Food Programme for operations conducted in the Republic of Congo.
A Simulation Framework for UAV-Aided Humanitarian Logistics
Robert van Steenbergen and Martijn Mes (University of Twente)
Abstract
This paper presents a generic simulation framework for the evaluation of humanitarian logistics, which can easily be tailored to a specific disaster scenario. Within the framework, various methods can be implemented for the planning and control of vehicles in affected areas. The aim is to explore how emerging technologies, specifically Unmanned Aerial Vehicles (UAVs), can contribute to humanitarian logistics. We illustrate the application of the framework by modeling the distribution of relief goods after the earthquake on Sulawesi, Indonesia in 2018.
Logistics, Supply Chains, and Transportation
Transport
Chair: Ralf Elbert (Technische Universität Darmstadt)
Mixing It Up: Simulation of Mixed Traffic Container Terminals
Berry Gerrits, Martijn Mes, and Peter Schuur (University of Twente)
Abstract
The development from a completely manual brownfield terminal towards a highly automated one proceeds in a number of steps. To take the first steps, it is crucial to know how introducing Automated Yard Tractors (AYTs) influences port productivity at Mixed-Traffic Terminals (MTTs), in which (non-automated) road trucks and yard tractors share the same infrastructure. This paper employs discrete-event simulation to analyze the performance of a brownfield terminal where manual yard tractors are replaced by AYTs. Our results look promising: we are able to reach high utilization rates in MTTs with a modest number of AYTs; in fact, with the same number of yard tractors as are typically used in comparable terminals with manually operated vehicles. Special attention is paid to the influence of road trucks on terminal congestion during peak hours.
Simulation-Based Analysis of a Cross-Actor Pallet Exchange Platform
Ralf Elbert and Roland Lehner (Technische Universität Darmstadt)
Abstract
Pallets are returnable transport items and of great importance for supply chains. They ensure efficient storage, transport, and handling processes. The pallet cycle, however, is associated with a substantial effort. In addition to administrative costs, extra trips and detours must often be taken by forwarders to retrieve pallets or buy new pallets. In this paper, a fictitious cross-actor pallet exchange platform is analyzed, which manages pallet debts and receivables between the different actors of a supply chain. A claim transfer is performed, and the actors no longer owe pallets to each other, but to the system. This provides greater flexibility, as actors with open claims can collect pallets from all actors that have a negative balance according to the system. Our analysis shows that with such a system, additional trips can be reduced by 70 %, thus making the management of pallets more efficient.
Effects of Terminal Size, Yard Block Assignment, and Dispatching Methods on Container Terminal Performance
Anne Kathrina Schwientek, Ann-Kathrin Lange, and Carlos Jahn (Hamburg University of Technology)
Abstract
Given the growth in ship size and increasing demands, it is essential for seaport container terminals to make sound tactical and operational decisions. Horizontal transport is an important element of container terminals, connecting seaside handling with container storage. An efficient design of horizontal transport, especially the assignment of vehicles to orders, strongly influences terminal performance. Most existing scientific studies vary only individual parameters or dispatching strategies, neglecting different terminal targets. This study addresses this research gap. In a discrete-event simulation model, the influence of various terminal parameters on dispatching strategies is examined, taking the terminal targets into account. The two most influential terminal parameters, terminal size and the assignment of yard blocks to containers, are analyzed in detail. The results show that the best choice of yard block assignment and dispatching method for a given terminal size depends on the combination of both parameters and the aspired targets.
Logistics, Supply Chains, and Transportation
Material Supply
Chair: Bahar Biller (SAS Institute, Inc)
A Mixed-Integer Formulation to Optimize the Resupply of Components for the Installation of Offshore Wind Farms
Daniel Rippel, Nicolas Jathe, and Michael Lütjen (BIBA - Bremer Institut für Produktion und Logistik GmbH at the University of Bremen) and Michael Freitag (University of Bremen)
Abstract
Over the last decade, offshore wind energy has become a viable source of sustainable energy. During the installation of offshore wind farms, the dimensioning of the related base ports has become increasingly important. Base ports act as central logistics hubs for the installation and thus need to provide sufficient space and equipment to store and handle the required components. Current developments tend toward larger and heavier components as well as toward an increase in simultaneously occurring installation and decommissioning projects. This article proposes a mixed-integer formulation for optimizing supply deliveries to the base port. Moreover, it presents an example of integrating this formulation with an optimization of the base port capacity, as well as a simulation study on the effects of different resupply cycles on the efficiency of installation projects. The results show that the resupply cycle has a minor influence on a project's efficiency but strongly affects the required base port capacity.
Visualising the Impact of Early Design Decisions on a Modular Housing Supply Network
Victor Guang Shi (AMRC, University of Sheffield); Alison McKay (University of Leeds); Anthony Waller (Lanner Group); Ruby Hughes (AMRC, University of Sheffield); and Richard Chittenden (University of Leeds)
Abstract
Increasingly, modular housing manufacturers choose to compete by becoming system integrators. The transition to this new business model requires companies to develop system-level knowledge that ‘they know more than they produce’ in managing supply networks. Engineering system design tools, through visualizing supply network design and make processes, can help companies to overcome many uncertainties when they choose to become system integrators within the supply network. This paper illustrates how simulation models can be derived from design characteristics, experiential data, and a pragmatic system engineering design ‘vee’ framework. This helps modular housing managers to observe and compare the risks of different products and supply network configurations, and see how behavior changes of individual organizations impact system-level performance. A modular housing case study illustrates the implementation and benefits of our approach.
Product Life Cycle Perspective on ICT Product Supply Chain Resilience
Janis Grabis (Riga Technical University)
Abstract
Resilient supply chains are designed and operated to deal with disruptive events in an efficient manner. In ICT product supply chains, disruptions are often observed as vulnerabilities discovered in components used in the products. The vulnerabilities can be associated with the components themselves as well as with suppliers, and they can be averted by patching and supply chain reconfiguration. The paper elaborates a simulation model for analyzing the relationships between the cost of treating the vulnerabilities and the supply chain configuration. We show that flexible supply chain configurations have the lowest cost and are the most resilient to vulnerabilities. Vulnerabilities associated with suppliers, for example due to the loss of their trustworthiness, cause more significant fluctuations in supply chain performance than vulnerabilities associated with individual components. Monitoring costs have a significant impact on the selection of the most resilient configuration.
Logistics, Supply Chains, and Transportation
Planning under Uncertainty
Chair: Bhakti Stephan Onggo (University of Southampton)
Local Search and Tabu Search Algorithms for Machine Scheduling of a Hybrid Flow Shop Under Uncertainty
Christin Schumacher, Peter Buchholz, Kevin Fiedler, and Nico Gorecki (Technische Universität Dortmund)
Abstract
In production systems, scheduling problems must be solved under complex environmental conditions. In this paper, we present a comprehensive scheduling approach that is applicable in real industrial environments. To cope with the parameter uncertainty of real-world problems, forecasting, classification, and simulation techniques are combined with heuristic optimization algorithms. The approach thus makes it possible to identify and account for demand fluctuations and scrap rates, and it offers a selection of suitable schedules depending on the particular demand constellation. Furthermore, we adapt seven optimization algorithms for two-stage hybrid flow shops with unrelated machines, machine qualifications, and stage skipping, with the objective of minimizing the makespan. The combination of methods is validated on a real production case from the automobile industry. For this application case, the paper shows that metaheuristics provide significantly better results than SPT, and that safety factors above a certain size can reduce their effect, preventing incomplete demand positions.
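The SPT (shortest processing time) dispatching rule used as a baseline in work like this can be stated in a few lines. The toy single-machine instance below (invented numbers) shows the classic property that sequencing jobs in SPT order minimizes total completion time; the two-stage hybrid flow shop studied in the paper is, of course, much harder.

```python
def total_completion_time(jobs):
    """Sum of completion times when jobs run back-to-back on one machine."""
    t, total = 0.0, 0.0
    for p in jobs:
        t += p       # this job finishes at the running clock time
        total += t   # accumulate its completion time
    return total

jobs = [7.0, 3.0, 5.0, 2.0, 8.0, 4.0]   # toy processing times
spt = sorted(jobs)                      # shortest processing time first
print(total_completion_time(jobs), total_completion_time(spt))  # prints 103.0 80.0
```

SPT is provably optimal for total completion time on a single machine, but for makespan objectives on multi-stage shops with unrelated machines it is only a heuristic baseline, which is why metaheuristics can beat it.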
pdf
Selective Pick-up and Delivery Problem: A Simheuristic Approach
Tejas Ghorpade (Indian Institute of Technology, Boston University) and Canan Gunes Corlu (Boston University)
Abstract Abstract
The one-commodity pick-up and delivery traveling salesman problem (1-PDTSP) concerns the transportation of a single type of goods that are picked up from supply locations and delivered to demand points while minimizing transportation costs. A variant of the 1-PDTSP is the selective pick-up and delivery problem (SPDP), which relaxes the requirement that all pick-up locations be visited. The SPDP is applicable in several areas, including food redistribution operations, where excess edible food from restaurants and food vendors is collected and delivered to food banks or meal centers, where it can be made available to those in need. Because the SPDP is an NP-hard problem, metaheuristic algorithms have been proposed in the literature to solve it. However, these algorithms assume that all inputs are deterministic, which may not be the case in practice. This paper considers the stochastic SPDP and proposes a simheuristic algorithm that integrates a GRASP metaheuristic with Monte Carlo simulation.
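As a rough illustration of the simheuristic template described above, the sketch below couples a GRASP-style greedy randomized constructor with a Monte Carlo evaluation of each candidate route. The toy instance, noise model, and all parameters are invented for illustration and are not taken from the paper:

```python
import math
import random

# Hypothetical toy instance: depot 0 plus three pick-up/delivery points.
COORDS = {0: (0, 0), 1: (2, 1), 2: (1, 3), 3: (4, 2)}

def dist(a, b):
    (x1, y1), (x2, y2) = COORDS[a], COORDS[b]
    return math.hypot(x1 - x2, y1 - y2)

def grasp_route(alpha=0.5, rng=random):
    """Greedy randomized construction: repeatedly pick the next stop from a
    restricted candidate list (RCL) of the nearest unvisited nodes."""
    route, remaining = [0], set(COORDS) - {0}
    while remaining:
        cand = sorted(remaining, key=lambda n: dist(route[-1], n))
        rcl = cand[:max(1, math.ceil(alpha * len(cand)))]
        nxt = rng.choice(rcl)
        route.append(nxt)
        remaining.remove(nxt)
    return route + [0]          # close the tour at the depot

def simulate_cost(route, n_reps=200, rng=random):
    """Monte Carlo estimate of expected travel cost when leg times are
    stochastic (uniform noise around the deterministic distances)."""
    total = 0.0
    for _ in range(n_reps):
        total += sum(dist(a, b) * rng.uniform(0.8, 1.4)
                     for a, b in zip(route, route[1:]))
    return total / n_reps

# Simheuristic loop: construct deterministic candidate solutions with GRASP,
# then rank them by their simulated (expected) cost.
rng = random.Random(42)
best_route, best_cost = None, float("inf")
for _ in range(20):
    route = grasp_route(rng=rng)
    cost = simulate_cost(route, rng=rng)
    if cost < best_cost:
        best_route, best_cost = route, cost
```

The key design point of the simheuristic is that the metaheuristic searches over deterministic solutions while the simulation, not the deterministic objective, decides which solution is kept.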
pdf
Investigating Brexit Implications on the Irish Agri-Food Exports: A Simulation-Based Scenario Mapping Model
Amr Mahfouz, Rishi Choudhary, and John Crowe (Technological University Dublin); Aly Owida (Arab Academy for Science, Technology and Maritime Transport); and Wael Rashwan (Technological University Dublin)
Abstract Abstract
The Irish economy is highly dependent on the UK market with a total export value surpassing €14 billion. Several reports have warned of severe bottlenecks at the Irish and British ports if new customs checks are reintroduced. A significant disruption is also expected to the traffic flow between Ireland and Britain because of the lack of proper checking infrastructure at some ports.
This situation will have a devastating impact on the competitive advantage of various Irish exports to the UK market, particularly limited-shelf-life products.
Hence, a simulation model has been developed to investigate three Brexit scenarios: 1) applying non-tariff barriers at ports, 2) replacing the UK Landbridge with direct routes to continental Europe, and 3) lack of checking infrastructure at the UK ports. The scenarios' implications for the transportation time and shelf life of Irish cheese exports to the UK are investigated, leading to one recommended scenario.
pdf
Manufacturing Applications
Track Coordinator - Manufacturing Applications: Christoph Laroque (University of Applied Sciences Zwickau), Guodong Shao (National Institute of Standards and Technology)
Manufacturing Applications
Semiconductor Applications
Chair: Guodong Shao (National Institute of Standards and Technology)
A Discrete-event Heuristic for Makespan Optimization in Multi-server Flow-shop Problems with Machine Re-entering
Angel A. Juan (Universitat Oberta de Catalunya), Christoph Laroque (University of Applied Sciences Zwickau), Pedro Copado and Javier Panadero (Universitat Oberta de Catalunya), and Rocio de la Torre (INARBE Institute)
Abstract Abstract
Modern manufacturing is characterized by customer-specific products that must be delivered within given lead times and due dates. Many of these systems can be modeled as flow shops in which some of the processes can handle jobs on parallel machines. In addition, complex manufacturing environments contain specific machine loops or re-entry cycles where jobs re-enter specific processes. A specific server is assigned to a job the first time it visits a machine, and this job has to be processed by exactly the same server if it re-visits the machine. With the goal of minimizing the makespan, this paper analyzes this complex flow-shop setting and proposes an original discrete-event heuristic for solving it in short computing times. Our algorithm combines biased (non-uniform) randomization strategies with the use of a discrete-event list, which is iteratively processed as the simulation clock advances. A series of computational experiments illustrates the potential of our methodology.
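The biased (non-uniform) randomization mentioned above is commonly realized with a quasi-geometric skew over a greedily sorted candidate list. The following sketch shows only that selection mechanism on a hypothetical job list; the skew parameter `beta` and the job names are illustrative assumptions:

```python
import math
import random

def biased_pick(sorted_candidates, beta=0.3, rng=random):
    """Biased randomization: sample an index from a quasi-geometric
    distribution so that better-ranked candidates are chosen more often,
    without always picking the greedy first one."""
    u = 1.0 - rng.random()                       # u in (0, 1]
    idx = int(math.log(u) / math.log(1.0 - beta)) % len(sorted_candidates)
    return sorted_candidates[idx]

# Hypothetical use: jobs pre-sorted by a greedy priority criterion;
# sampling many picks shows the skew toward the top of the list.
rng = random.Random(1)
jobs = ["J1", "J2", "J3", "J4"]
counts = {j: 0 for j in jobs}
for _ in range(1000):
    counts[biased_pick(jobs, rng=rng)] += 1
```

Because the first-ranked candidate is picked most often but not always, repeated runs of the heuristic explore different, still-greedy-flavored schedules.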
pdf
Simulation-based Evaluation of Lot Release Policies in a Power Semiconductor Facility - A Case Study
Henriette Allgeier, Christian Flechsig, Jacob Lohmer, and Rainer Lasch (Technische Universität Dresden) and Germar Schneider and Benjamin Zettler (Infineon Technologies Dresden GmbH)
Abstract Abstract
Lot release policies, i.e., the decision of which lots to start in production, in what quantity, and at what time, have a significant influence on fab performance. Recent research has focused on closed-loop policies. However, most studies have only demonstrated feasibility in settings with low-mix or low-volume simulation testbeds. In this paper, we focus on a real-world pre-assembly facility in a high-volume and high-mix semiconductor wafer fab. We conduct an in-depth, deterministic discrete-event simulation in two stages, using real production data and demands. First, we test two existing open-loop lot release policies (random and constant release) against a simple closed-loop release policy. Significant improvements in on-time delivery, bottleneck utilization, and throughput are notable. Second, we compare three closed-loop release policies and indicate which policy provides the best results for particular KPIs such as enhanced on-time delivery or reduced tardiness.
pdf
Manufacturing Applications
Simulation and Optimization in Manufacturing
Chair: Klaus Altendorfer (Upper Austrian University of Applied Science)
Multi-Level Optimization With Aggregated Discrete-Event Models
Simon Lidberg (University of Skövde, Volvo Car Corporation) and Tehseen Aslam and Amos H.C. Ng (University of Skövde)
Abstract Abstract
Removing bottlenecks that restrain the overall performance of a factory can give companies a competitive edge. Although in principle it is possible to connect multiple detailed discrete-event simulation models to form a complete factory model, doing so can be too computationally expensive, especially if the connected models are used for simulation-based optimizations. Observing that the computational cost of running a simulation model can be significantly reduced, at some loss of detail, by aggregating multiple line-level models into an aggregated factory level, this paper investigates whether the bottleneck information identified from an aggregated factory model, in terms of which parameters to improve, is useful and accurate enough when compared to the bottleneck information obtained with detailed connected line-level models. The results of a real-world, multi-level industrial application study demonstrate the feasibility of this approach, showing that the aggregation method can represent the underlying detailed line-level model for bottleneck analysis.
pdf
Simulation-based Multi-objective Optimization for Reconfigurable Manufacturing System Configurations Analysis
Carlos Alberto Barrera Diaz (University of Skövde), Erik Flores Garcia (KTH Royal Institute of Technology), Tehseen Aslam and Amos H.C Ng (University of Skövde), and Magnus Wiktorsson (KTH Royal Institute of Technology)
Abstract Abstract
The purpose of this study is to analyze the use of Simulation-Based Multi-Objective Optimization (SMO) for Reconfigurable Manufacturing System Configuration Analysis (RMS-CA). In doing so, this study addresses the need to perform RMS-CA efficiently given the limited time for decision-making in industry, and investigates one of the salient problems of RMS-CA: determining the minimum number of machines necessary to satisfy demand. The study adopts an NSGA-II optimization algorithm and presents two contributions to the existing literature. Firstly, the study proposes a series of steps for the use of SMO for RMS-CA and shows how to simultaneously maximize production throughput and minimize lead time and buffer size. Secondly, the study presents a comparison between prior work in RMS-CA and the proposed use of SMO. The study discusses the advantages and challenges of using SMO and provides critical insight for production engineers and managers responsible for production system configuration.
pdf
On the Use of Simheuristics to Optimize Safety-Stock Levels in Material Requirements Planning with Random Demands
Barry Barrios, Angel Juan, and Javier Panadero (Universitat Oberta de Catalunya – IN3); Alejandro Estrada-Moreno (Universitat Rovira i Virgili); and Klaus Altendorfer and Andreas J. Peirleitner (University of Applied Sciences Upper Austria)
Abstract Abstract
Material requirements planning (MRP) integrates the planning of production, scheduling, and inventory activities in a manufacturing process. Many approaches to MRP management focus either on the simulation of the system (without considering optimization aspects) or on its optimization (without considering stochastic aspects). This paper analyzes an MRP version in which the demand for final products in each period is a random variable. The goal is then to find the optimal safety-stock configuration of both the product and the parts, i.e., the configuration that minimizes the expected total cost. This total cost is given by: (i) the inventory cost; and (ii) a penalty cost generated by the occurrence of stock-outs. To solve this stochastic optimization problem, a spreadsheet simulation model is proposed and a heuristic procedure is employed over it. A numerical example illustrates the main concepts of the proposed approach as well as its potential.
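The trade-off described above, inventory cost versus stock-out penalty under random demand, can be sketched with a single-item Monte Carlo model. Everything below (normal demand, cost rates, the grid search over safety-stock levels) is an illustrative assumption, not the paper's multi-part MRP spreadsheet model:

```python
import random

def expected_cost(safety_stock, mean_demand=100.0, sd=20.0,
                  holding_cost=1.0, shortage_cost=10.0,
                  n_periods=5000, seed=0):
    """Monte Carlo estimate of the expected per-period cost of a given
    safety-stock level: holding cost when supply exceeds demand, penalty
    cost when a stock-out occurs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_periods):
        demand = rng.gauss(mean_demand, sd)
        on_hand = mean_demand + safety_stock - demand
        if on_hand >= 0:
            total += holding_cost * on_hand      # leftover inventory
        else:
            total += shortage_cost * (-on_hand)  # unmet demand penalty
    return total / n_periods

# Simple heuristic search over candidate safety-stock levels.
best_ss = min(range(0, 101, 5), key=expected_cost)
```

With the shortage penalty an order of magnitude above the holding cost, the minimizing safety stock is strictly positive, which is the qualitative effect the stochastic MRP optimization exploits.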
pdf
Manufacturing Applications
Simulation and AI I
Chair: Daniel Nåfors (Chalmers University of Technology)
Deep Q-Network Model for Dynamic Job Shop Scheduling Problem Based on Discrete Event Simulation
Yakup Turgut and Cafer Erhan Bozdag (Istanbul Technical University)
Abstract Abstract
In the last few decades, dynamic job shop scheduling problems (DJSPs) have received increasing attention from researchers and practitioners. However, the potential of reinforcement learning (RL) methods has not been exploited adequately for solving DJSPs. In this work, a deep Q-network (DQN) model is applied to train an agent to learn how to schedule jobs dynamically by minimizing the delay time of jobs. The DQN model is trained based on a discrete-event simulation experiment. The model is tested by comparing the trained DQN model against two popular dispatching rules, shortest processing time and earliest due date. The obtained results indicate that the DQN model performs better than these dispatching rules.
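The two baseline dispatching rules used for comparison, shortest processing time (SPT) and earliest due date (EDD), can be made concrete on a single machine; the four-job instance below is hypothetical:

```python
def total_tardiness(jobs, key):
    """Sequence jobs on one machine by the given priority key and return
    total tardiness. Each job is a (processing_time, due_date) pair."""
    t, tardy = 0, 0
    for p, d in sorted(jobs, key=key):
        t += p                      # job completes at time t
        tardy += max(0, t - d)      # tardiness against its due date
    return tardy

jobs = [(5, 6), (2, 3), (4, 12), (1, 4)]          # hypothetical instance
spt = total_tardiness(jobs, key=lambda j: j[0])   # shortest processing time
edd = total_tardiness(jobs, key=lambda j: j[1])   # earliest due date
```

On this instance EDD yields lower total tardiness than SPT; neither rule dominates across instances, which is the gap a learned policy such as a DQN aims to close.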
pdf
Machine Learning (Reinforcement Learning)-based Steel Stock Yard Planning Algorithm
Jong Hun Woo, Young In Cho, Sang Hyun Yu, So Hyun Nam, Haoyu Zhu, and Dong Hoon Kwak (Seoul National University) and Jong-Ho Nam (Korea maritime and ocean university)
Abstract Abstract
Steel plates are supplied to shipyards without adhering to the processing schedule because they are ordered in bulk according to steel market conditions and the mid- to long-term production strategy. Hence, steel plates are stacked in the steel stock yard until they are input to the processing system and then supplied sequentially according to the processing start date. Currently, the steel stock yard is operated based on the experience of field workers, and inefficiencies such as excessive crane use occur, so efficient management techniques are required. However, conventional optimization algorithms have limitations because the input timing of the steel is random. In this study, the order of steel input into the steel stock yard is determined using a reinforcement learning algorithm. An effective algorithm (A3C) was identified through tests, and it was validated that the proposed method is effective for problems of actual-size steel stock yards.
pdf
Simulation Evaluation of Automated Forecast Error Correction Based on Mean Percentage Error
Sarah Zeiml and Ulrich Seiler (University of Applied Sciences Upper Austria), Thomas Felberbauer (St. Pölten University of Applied Sciences), and Klaus Altendorfer (University of Applied Sciences Upper Austria)
Abstract Abstract
A supplier-customer relationship is studied in this paper, where the customer provides demand forecasts that are updated on a rolling horizon basis. The forecasts show systematic and unsystematic errors related to periods before delivery. The paper presents a decision model to decide whether a recently presented forecast correction model should be applied or not. The introduced dynamic correction model is evaluated for different market scenarios, i.e., seasonal demand with periods with significantly higher or lower demand, and changing planning behaviors, where the systematic bias changes over time. The study shows that the application of the developed dynamic forecast correction model leads to significant forecast quality improvement. However, if no systematic forecast bias occurs, the correction reduces forecast accuracy.
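A mean-percentage-error-based correction of the kind evaluated here can be sketched as follows; the threshold-based decision rule and the rescaling formula are illustrative assumptions, not the paper's model:

```python
def mean_percentage_error(forecasts, actuals):
    """MPE over a history of forecast/actual pairs; a positive value
    indicates systematic over-forecasting (error = (f - a) / a)."""
    return sum((f - a) / a for f, a in zip(forecasts, actuals)) / len(forecasts)

def corrected_forecast(new_forecast, forecasts, actuals, threshold=0.05):
    """Apply the correction only when a systematic bias is detected;
    otherwise leave the customer's forecast untouched (since correcting
    an unbiased forecast can reduce accuracy)."""
    mpe = mean_percentage_error(forecasts, actuals)
    if abs(mpe) <= threshold:
        return new_forecast           # no systematic bias detected
    return new_forecast / (1 + mpe)   # rescale to remove the average bias
```

The guard clause mirrors the abstract's finding: when no systematic bias is present, blindly applying the correction degrades forecast accuracy, so a decision model for when to correct is needed.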
pdf
Manufacturing Applications
Simulation and AI II
Chair: Thomas Felberbauer (St. Pölten University of Applied Sciences)
Scheduling Jobs in a Two-Stage Hybrid Flow Shop with a Simulation-Based Genetic Algorithm and Standard Dispatching Rules
Benjamin Rolf, Tobias Reggelin, Abdulrahman Nahhas, and Marcel Müller (Otto von Guericke University Magdeburg) and Sebastian Lang (Fraunhofer Institute for Factory Operation and Automation IFF)
Abstract Abstract
The paper proposes a simulation-based hyper-heuristic approach to generate schedules for a two-stage hybrid flow shop scheduling problem with sequence-dependent setup times. The scheduling problem is derived from a company that assembles printed circuit boards. A genetic algorithm determines sequences of standard dispatching rules that are evaluated by a discrete-event simulation model minimizing a multi-criteria objective composed of makespan and total tardiness. To reduce the computation time of the algorithm, a dispatching-rule-based chromosome representation is used, containing a sequence of dispatching rules and the time intervals in which the rules are applied. Different experiment configurations and their impact on solution quality and computation time are analyzed. The optimization model generates efficient schedules for multiple real-world data sets.
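The dispatching-rule-based chromosome representation can be sketched as a list of (rule, time-window) genes that the simulation decodes as its clock advances. The rule set, job attributes, and decoding logic below are illustrative assumptions:

```python
import random

# Candidate dispatching rules: each maps a job (a dict) to a priority value.
RULES = {
    "SPT": lambda job: job["proc"],       # shortest processing time first
    "EDD": lambda job: job["due"],        # earliest due date first
    "FIFO": lambda job: job["arrival"],   # first in, first out
}

def decode(chromosome, queue, now):
    """A chromosome is a list of (rule_name, until_time) genes; the gene
    whose time window covers the current clock decides the next job."""
    for rule, until in chromosome:
        if now < until:
            return min(queue, key=RULES[rule])
    return min(queue, key=RULES[chromosome[-1][0]])  # past last window

def random_chromosome(horizon=100, genes=3, rng=random):
    """Random initial individual: rule choices with sorted window cut points."""
    cuts = sorted(rng.sample(range(10, horizon), genes - 1)) + [horizon]
    return [(rng.choice(list(RULES)), t) for t in cuts]

# Hypothetical two-job queue and a fixed chromosome for demonstration.
queue = [{"proc": 5, "due": 20, "arrival": 0},
         {"proc": 2, "due": 30, "arrival": 1}]
plan = [("SPT", 50), ("EDD", 100)]
```

Because a chromosome encodes only a handful of rule choices and cut points rather than a full job permutation, the genetic algorithm's search space, and hence its computation time, shrinks considerably.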
pdf
Simulation-Based Deep Reinforcement Learning for Modular Production Systems
Niclas Feldkamp, Soeren Bergmann, and Steffen Strassburger (Ilmenau University of Technology)
Abstract Abstract
Modular production systems aim to supersede traditional line production in the automobile industry. The idea is that highly customized products can move dynamically and autonomously through a system of flexible workstations without fixed production cycles. This approach places challenging demands on the planning and organization of such systems. Since each product can define its way through the system freely and individually, implementing rules and heuristics that leverage the flexibility in the system in order to increase performance can be difficult in this dynamic environment. Transport tasks are usually carried out by automated guided vehicles (AGVs). Therefore, the integration of AI-based control logic offers a promising alternative to manually implemented decision rules for operating the AGVs. This paper presents an approach for using reinforcement learning (RL) in combination with simulation in order to control AGVs in modular production systems. We present a case study and compare our approach to heuristic rules.
pdf
Dynamically Changing Sequencing Rules with Reinforcement Learning in a Job Shop System with Stochastic Influences
Jens Heger and Thomas Voss (Leuphana Universität)
Abstract Abstract
Sequencing operations can be difficult, especially under uncertain conditions. Applying decentralized sequencing rules has been a viable option; however, no single rule outperforms all other rules under varying system conditions. For this reason, reinforcement learning (RL) is used as a hyper-heuristic to select a sequencing rule based on the system status. Based on multiple training scenarios considering stochastic influences, such as varying inter-arrival times or customers changing the product mix, the advantages of RL are presented. For evaluation, the trained agents are deployed in a generic manufacturing system. The best trained agent is able to dynamically adjust sequencing rules based on system status, thereby matching and outperforming the presumed best static sequencing rules by ~3%. Using the trained policy in an unknown scenario, the RL heuristic is still able to change the sequencing rule according to the system status, thereby providing robust performance.
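The hyper-heuristic idea, learning which sequencing rule to apply in which system state, can be sketched with tabular Q-learning. The two-state toy environment and its reward below are assumptions for illustration, not the paper's simulation:

```python
import random

class RuleSelector:
    """Tabular Q-learning agent that selects a sequencing rule from an
    observed system state (here: a coarse state index such as a binned
    queue length)."""

    def __init__(self, rules, n_states, alpha=0.2, gamma=0.9, eps=0.2, seed=0):
        self.rules, self.alpha, self.gamma, self.eps = rules, alpha, gamma, eps
        self.rng = random.Random(seed)
        self.q = {(s, r): 0.0 for s in range(n_states) for r in rules}

    def act(self, state):
        if self.rng.random() < self.eps:                 # explore
            return self.rng.choice(self.rules)
        return max(self.rules, key=lambda r: self.q[(state, r)])

    def learn(self, s, rule, reward, s_next):
        best_next = max(self.q[(s_next, r)] for r in self.rules)
        self.q[(s, rule)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(s, rule)])

# Toy environment: assume FIFO is the right rule when the shop is lightly
# loaded (state 0) and SPT when it is congested (state 1).
rules = ["FIFO", "SPT"]
agent = RuleSelector(rules, n_states=2)
env_rng = random.Random(7)
state = 0
for _ in range(5000):
    rule = agent.act(state)
    good = (state == 0 and rule == "FIFO") or (state == 1 and rule == "SPT")
    next_state = env_rng.randrange(2)
    agent.learn(state, rule, 1.0 if good else 0.0, next_state)
    state = next_state
```

After training, the greedy policy picks a different rule in each state, which is exactly the dynamic rule-switching behavior the abstract reports.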
pdf
Manufacturing Applications
Manufacturing Applications I
Chair: Christoph Laroque (University of Applied Sciences Zwickau)
Simulation in Hybrid Digital Twins for Factory Layout Planning
Daniel Nåfors and Björn Johansson (Chalmers University of Technology), Sven Erixon (Plastal Industri AB), and Per Gullander (RISE Research Institutes of Sweden AB)
Abstract Abstract
As manufacturing companies make changes to their production systems, changes to the factory layout usually follow. The layout of a factory concerns the positioning of all elements in the production system and can contribute to the overall efficiency of operations and the work environment. The process of planning factory layouts affects both the installation of the changes and the operation of the production system, so its effects are felt over a long period of time. By combining 3D laser scanning, Virtual Reality, CAD models, and simulation modelling in a hybrid digital twin, this planning process can be noticeably improved, yielding benefits in all phases. This is exemplified via a novel longitudinal industrial study using participant observation to gather data. Findings from the study show that the factory layout planning process can be innovated by smart use of modern digital technologies, resulting in better solutions and more informed decisions with reduced risk.
pdf
Design and Simulation of a New Biomedical Production Process
Annika Garbers, Victoria Nolletti, Krista Stanislow, and Michael Kuhl (Rochester Institute of Technology)
Abstract Abstract
This paper presents the design and analysis of a lean production system for a new biomedical technology product that has the potential to accelerate the screening process for cancer treatments. To navigate the unique constraints of the product’s manufacturing process, including the use of time-sensitive biomaterial and several steps with long processing times, simulation is utilized to analyze and compare multiple system designs and production scenarios. The final design includes a robust production schedule, a modular facility layout, and lean production control tools for daily facility operation. All of the proposed designs are compliant with regulations governing the correct handling of human tissue and other biomaterials, and guidelines from the regulatory bodies were explicitly incorporated in key decisions throughout the design and simulation modeling process.
pdf
A Heijunka Study for Automotive Assembly Using Discrete-Event Simulation: A Case Study
Ivan Arturo Renteria-Marquez, Anabel Renteria, Carmen Noemi Almeraz, and Tzu-Liang (Bill) Tseng (University of Texas at El Paso)
Abstract Abstract
The automotive manufacturing industry is constantly challenged by unpredictable customer demand with considerable fluctuation. In addition, the number of products and their complexity are constantly growing. These characteristics make it very difficult to implement lean manufacturing tools such as production leveling (Heijunka), because it is extremely difficult to find the optimal production leveling batch size for complex systems. This paper presents a methodology to model, with a high degree of accuracy, the production floor, warehouse, and material handling system of an automotive assembly facility through discrete-event simulation software, allowing one to determine the optimal batch size of a complex manufacturing assembly system through "what-if" analysis.
pdf
Manufacturing Applications
Manufacturing Applications II
Chair: Christoph Laroque (University of Applied Sciences Zwickau)
Simulation-Aided Assessment of Team Performance: The Effects of Transient Underachievement and Knowledge Transfer
Yaileen Mendez-Vazquez (Milwaukee School of Engineering, Oregon State University); David Nembhard (Oregon State University); and Mauricio Cabrera-Rios (University of Puerto Rico Mayaguez)
Abstract Abstract
Many organizations have considered implementing teamwork as an approach to improve organizational performance and boost the learning process of workers. Despite the benefits offered by teamwork, the literature has also shown negative aspects of this kind of work setting, including the transient initial team underachievement known as process loss. Studies have investigated the effect of implementing teamwork strategies on team productivity. However, most of these studies remain observational in nature, partially due to the complexity associated with performing physical experimentation in teamwork manufacturing settings and with studying human cognition. The current study proposes the use of simulation as a strategy for conducting experimentation in this kind of setting. This work capitalizes on simulation to investigate the joint effect of knowledge transfer and process loss on team productivity in manufacturing settings. The joint effect of these factors on team productivity remains unknown in the current teamwork literature.
pdf
Simulation-Based Performance Assessment Of A New Job-Shop Dispatching Rule For Semi-Heterarchical Industry 4.0 Architectures
Guido Guizzi, Silvestro Vespoli, Andrea Grassi, and Liberatina Carmela Santillo (Università degli Studi di Napoli Federico II - Dipartimento di Ingegneria Chimica, dei Materiali e della Produzione Industriale)
Abstract Abstract
In recent years, the advent of Industry 4.0 and the rise of concepts such as Cyber-Physical Systems and the Internet of Things have allowed a shift from the classical hierarchical approach to Manufacturing Planning and Control (MPC) systems toward a new class of more decentralized architectures. This paper proposes a decentralized scheduling approach able to improve the performance of a job-shop production system, compliant with semi-heterarchical Industry 4.0 architectures. To face the increasing complexity of such a scenario, a parametric simulation model able to represent a wide range of job-shop systems is introduced. Then, through a simulation experimental campaign, the performance of the proposed approach is assessed as a function of different control parameter settings. The results show that the proposed dispatching rule (DRP) leads to a significant productivity increase, indicating that a semi-heterarchical architecture may be feasible and effective in a job-shop production environment as well.
pdf
Deploying Discrete-Event Simulation and Continuous Improvement to Increase Production Rate in a Modular Construction Facility
Fatima Alsakka, Salam Khalife, Mohammad Darwish, Mohamed Al-Hussein, and Yasser Mohamed (University of Alberta)
Abstract Abstract
Aiming at continuous improvement, a modular construction company attained favorable results by implementing recommendations that were based on value stream mapping analysis. Yet, there is still a need to assess the production lines in a unified and integrated manner. As such, this study employed simulation to model five major production lines in the factory to evaluate their performance concurrently and suggest improvements. Bottlenecks were identified by tracking the waiting times at different stations, and an iterative and sequential approach was adopted. After eight suggested improvements and tested scenarios, results showed a 17.8% reduction in the unit cycle time and 22% increase in the weekly production rate. The study's major takeaway is the importance of studying improvements in an integrated manner to avoid shifting bottlenecks, achieving local improvements that do not guarantee global improvements, and underestimating the effect of minor changes on the overall process. Simulation modelling helped target these issues.
pdf
MASM: Semiconductor Manufacturing
Track Coordinator - MASM: Semiconductor Manufacturing: John Fowler (Arizona State University), Michael Hassoun (Ariel University), Lars Moench (University of Hagen)
MASM: Semiconductor Manufacturing
AMHS
Chair: Anna Benzoni (Mines Saint-Étienne, Univ Clermont Auvergne; STMicroelectronics)
Static AMHS Simulation Based on Planned Product Mixes
Robert Schmaler and Christian Hammel (FabFlow GmbH) and Christian Schubert (Infineon Technologies Dresden GmbH)
Abstract Abstract
Automated material handling systems (AMHS) in semiconductor fabrication plants (Fabs) are crucial to achieving high production throughput. When upgrading an existing Fab or planning a new one, production plans are used to decide on the required tool set. But what about AMHS planning? To our knowledge, no method exists that, given an anticipated product mix, generates reliable transport patterns including non-productive transports. This paper outlines a methodology to generate such transport patterns, including non-productive transports for test wafers, empty FOUPs, and others. The prediction is based on a preliminary product mix supplemented with scheduling rules and AMHS characteristics, yielding deeper insight into actual AMHS capabilities and constraints during the planning phase. The results are used as input to a static simulation, which creates a forecast of track utilization for a given layout in which possible AMHS bottlenecks are highlighted.
pdf
Optimizing The Allocation Of Single-Lot Stockers In An AMHS In Semiconductor Manufacturing
Lucas Aresi (Ecole des Mines de Saint-Etienne, STMicroelectronics Crolles); Stéphane Dauzère-Pérès and Claude Yugma (Mines Saint-Etienne); and Moulaye Ndiaye and Lionel Rullière (STMicroelectronics)
Abstract Abstract
This paper addresses the problem of optimally allocating single-lot stockers, also called bins, to machines in an Automated Material Handling System (AMHS) of a semiconductor wafer manufacturing facility. A Mixed Integer Linear Programming (MILP) model is proposed that assigns single-lot stockers to groups of machines performing the same types of operations. Two criteria are minimized: the maximum travel time from bins to machines and the maximum utilization of bins. An important characteristic of the problem is that the number of changes from the original allocation is limited. Computational experiments on industrial data with more than 2,000 bins and 40 machine groups are conducted. The solutions of the MILP are analyzed with regard to the trade-off between the two criteria and the impact of the allowed number of changes.
pdf
Allocating Reticles in an Automated Stocker for Semiconductor Manufacturing Facility
Anna Benzoni (Mines Saint-Étienne, Univ Clermont Auvergne; STMicroelectronics); Claude Yugma (Mines Saint-Étienne, Univ Clermont Auvergne); and Pierre Bect and Alain Planchais (ST Microelectronics)
Abstract Abstract
This article addresses the problem of reticle allocation in the stocker of an existing photolithography workshop of a 200 mm semiconductor wafer manufacturing facility. A reticle stocker generally consists of two internal storage zones: the retpod, where reticles are stored with pods and have short retrieval times, and the carousel, a bare-reticle stocker with longer retrieval times. The reticle is an auxiliary resource in photolithography workshop operations. Thus, if the right reticles are not stored in the right places in the reticle stocker, it can quickly become a bottleneck. The purpose of the article is to determine which reticles to store in which place of the reticle stocker. This is a knapsack problem. Three heuristics considering the arrival of lots in the upstream steps of the photolithography workshop are proposed and tested on real instances.
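Since the allocation is cast as a knapsack problem, a standard 0/1 knapsack dynamic program conveys the core computation. The interpretation of values as expected near-term reticle usage and weights as storage space in the fast zone, and all numbers below, are invented for illustration:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack by dynamic programming: choose which items (e.g.,
    reticles for the fast retpod zone) to keep, maximizing total value
    within the zone's capacity. Returns (best value, chosen indices)."""
    n = len(values)
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]              # skip item i-1
            if weights[i - 1] <= c:                   # or take it if it fits
                best[i][c] = max(best[i][c],
                                 best[i - 1][c - weights[i - 1]] + values[i - 1])
    # Backtrack to recover the chosen items.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return best[n][capacity], sorted(chosen)
```

The paper's heuristics additionally account for lot arrivals at upstream steps, i.e., the values change over time, which is why static DP alone does not suffice in practice.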
pdf
MASM: Semiconductor Manufacturing
Factory Operations I
Chair: Michael Hassoun (Ariel University)
Multi-objective Optimization of a Sorting System
Arjan Smit, Jelle Adan, and Patrick C. Deenen (Eindhoven University of Technology)
Abstract Abstract
This paper addresses the optimization of a sorting system encountered in the semiconductor industry. The system consists of parallel sorting machines and a material handler that transports materials to, from, and between machines. The problem is decomposed into multiple subproblems, and heuristic methods are proposed for each of them. A discrete-event simulation is used to study the performance of the system under these heuristics. Application to two real-world cases shows that this heuristic approach can significantly decrease makespan and cost, yielding practically feasible schedules.
pdf
A Simulation Optimization Approach for Managing Product Transitions in Multistage Production Lines
Atchyuta Manda (North Carolina State University), Karthick Gopalswamy (Walmart Labs), and Sara Shashaani and Reha Uzsoy (North Carolina State University)
Abstract Abstract
The ability to rapidly achieve and sustain high-volume, high-quality production (ramp-up) of new products is critical to success in most high-tech industries, especially semiconductors. In this work, we explore the problem of managing releases into a multi-stage production system as it transitions from an old product to a new one. We develop a solution using simulation optimization and present experiments to explore the impact of flow variability, i.e., variability in the arrival process of work to downstream stations induced by the new product as it is debugged, on the rest of the system. This work lays the foundation for developing production planning models for new product introductions using fab-scale simulation models.
pdf
Heuristics for Order-Lot Pegging in Multi-Fab Settings
Lars Mönch (University of Hagen), Liji Shen (WHU - Otto Beisheim School of Management), and John Fowler (Arizona State University)
Abstract Abstract
In this paper, we study order-lot pegging problems in semiconductor supply chains. The problem deals with assigning already released lots to orders and with planning wafer releases to fulfill orders when there are not enough lots. The objective is to minimize the total tardiness of the orders. We propose a mixed integer linear programming (MILP) formulation for this problem. Moreover, we design a simple heuristic based on list scheduling and a biased random key genetic algorithm (BRKGA). Computational experiments are conducted based on problem instances from the literature for the single-fab case and newly proposed instances for the multi-fab setting. The results demonstrate that the BRKGA approach is able to determine high-quality solutions in a short amount of computing time.
pdf
MASM: Semiconductor Manufacturing
Factory Operations II
Chair: Thomas Ponsignon (Infineon Technologies AG)
A Deep Reinforcement Learning Approach For Optimal Replenishment Policy In A Vendor Managed Inventory Setting For Semiconductors
Muhammad Tariq Afridi and Santiago Nieto-Isaza (Technische Universität München) and Hans Ehm, Thomas Ponsignon, and Abdelgafar Hamed (Infineon Technologies AG)
Abstract Abstract
Vendor Managed Inventory (VMI) is a mainstream supply chain collaboration model. Measurement approaches defining minimum and maximum inventory levels for avoiding product shortages and over-stocking abound. However, no existing approach addresses the responsibility aspect concerning inventory level status, especially in the semiconductor industry, which is confronted with short product life cycles, long process times, and volatile demand patterns. In this work, a root-cause-enabling VMI performance measurement approach is developed to assign responsibilities for poor performance. Additionally, a solution methodology based on reinforcement learning is proposed for determining the optimal replenishment policy in a VMI setting. Using a simulation model, different demand scenarios are generated based on real data from Infineon Technologies AG and compared on the basis of key performance indicators. Results obtained by the proposed method show better performance than the company's current replenishment decisions.
pdf
Periodic Workload Control: A Viable Alternative For Semiconductor Manufacturing
Philipp Neuner, Stefan Haeussler, and Quirin Ilmer (University of Innsbruck)
Abstract
This paper analyzes a rule-based workload control model applied to a scaled-down semiconductor simulation model. We compare two well-established continuous order release models from the semiconductor domain, namely the Starvation Avoidance (SA) and the CONstant LOAD (ConLOAD) approach, with the COrrected aggregate Load Approach (COLA), which was originally developed for small and medium-sized make-to-order enterprises. The main difference between these order release approaches is that the former two (SA and ConLOAD) release orders continuously, whereas the latter (COLA) releases orders at periodic intervals. Contrary to earlier research on order release models for semiconductor manufacturing, we show that the periodic order release model outperforms the two continuous mechanisms by yielding lower costs and better timing performance. Thus, this paper highlights that periodic rule-based order release models are a viable alternative that has been largely neglected in recent semiconductor literature.
pdf
Challenges Associated with Realization of Lot Level Fab out Forecast in a Giga Wafer Fabrication Plant
Georg Seidel (Infineon Technologies Austria AG), Ching Foong Lee and Aik Ying Tang (Infineon Technologies (Kulim) Sdn. Bhd.), Wolfgang Scholl (Infineon Technologies Dresden GmbH), and Soo Leen Low and Boon Ping Gan (D-SIMLAB Technologies Pte Ltd)
Abstract
In the semiconductor industry, a reliable delivery forecast is helpful to optimize demand planning. Very often, cycle time estimations for frontend and backend production, testing, and transits are used to predict delivery times at the product level and to determine when products have to be started to fulfill customer demands on time. Frontend production usually consumes a large portion of the cycle time of a product. Therefore, a reliable cycle time estimation for frontend production is crucial for the accuracy of the overall cycle time prediction. We compare two different methods to predict cycle times and delivery forecasts at the product and lot level for a frontend production: a Big Data approach, where historical data is analyzed to predict future behavior, and a fab simulation model.
pdf
MASM: Semiconductor Manufacturing, Plenary
Wednesday Keynote
Chair: Lars Moench (University of Hagen)
Industry 3.5 as Hybrid Strategy empowered by AI & Big Data Analytics and Collaborative Research with Micron Taiwan for Smart Manufacturing
Chen-Fu Chien (National Tsing Hua University)
Abstract
The paradigm of global manufacturing is shifting as leading nations propose the next phase of the industrial revolution, with Industry 4.0 in Germany and a reemphasis on advanced manufacturing such as the AMP in the USA. Driven by Moore's Law, semiconductor manufacturing is one of the most complex industries, with continuous migration to advanced technologies for manufacturing excellence. Micron Technology is a world-leading producer of semiconductor memory and computer data storage that has established one of its largest manufacturing bases in Taiwan through a number of acquisitions of local fabs as well as investments in new fabs. Industry 3.5 was proposed as a hybrid strategy between the best practice of the existing Industry 3.0 and the to-be Industry 4.0 to address fundamental objectives for smart manufacturing, while employing artificial intelligence and big data analytics as means objectives for manufacturing intelligence solutions. This speech will introduce Industry 3.5 and use a number of empirical studies under the existing infrastructure for validation. Furthermore, collaborative research with Micron for smart manufacturing will be used to illustrate our continuous efforts employing artificial intelligence, big data analytics, optimization, and intelligent decision-making for smart manufacturing and digital transformation. This talk will conclude with a discussion of the implications of Industry 3.5 as an alternative to Industry 4.0 to empower humanity in the ongoing industrial revolution.
pdf
MASM: Semiconductor Manufacturing
Simulation Methodology I
Chair: Denny Kopp (University of Hagen)
First Steps Towards Bridging Simulation And Ontology To Ease The Model Creation On The Example Of Semiconductor Industry
Nour Ramzy, Christian James Martens, Shreya Singh, Thomas Ponsignon, and Hans Ehm (Infineon Technologies AG)
Abstract
With diverse product mixes in fabs, high demand volatility, and numerous manufacturing steps spread across different facilities, it is impossible to analyze the combined impacts of multiple operations in semiconductor supply chains without a modeling tool like simulation. This paper explains how ontologies can be used to develop and deploy simulation applications with interoperability and knowledge sharing at the semantic level. It proposes a concept for automatically building simulations using ontologies and presents preliminary results. The proposed approach seeks to save the time and effort expended in recreating, for different use cases, information that already exists elsewhere. The use case provides first indications that, with an enhancement of a so-called Digital Reference with Semantic Web Technologies, modeling and simulation of semiconductor supply chains will not only become much faster but also require less modeling effort because of reusability.
pdf
An Agent-Based Simulation Model With Human Resource Integration For Semiconductor Manufacturing Facility
Joris Werling and Claude Yugma (Ecole des Mines de Saint-Etienne); Ameur Soukhal (Polytech Tours, Université de Tours); and Thierry Mohr (STMicroelectronics)
Abstract
This paper presents an agent-based simulation model of a real workshop of a semiconductor factory. One of the main characteristics of the factory is the strong involvement of human resources in production operations. The purpose is to build a simulation tool to help decision-makers anticipate production issues. To do so, we defined hypotheses and built a model integrating operator characteristics. The simulation results are based on real industrial data from the company and show how difficult the evolution of production is to anticipate. Due to the complexity of the outputs, we introduce a new metric and use it to analyze the results. Finally, we draw conclusions and perspectives, in which we give our opinion on the building and use of such a complex model in an industrial environment.
pdf
Integrating Critical Queue Time Constraints into SMT2020 Simulation Models
Denny Kopp (University of Hagen); Michael Hassoun (Ariel University); Adar Kalir (Intel Corporation, Ben-Gurion University); and Lars Moench (University of Hagen)
Abstract
In this paper, we study the impact of critical queue time (CQT) constraints in semiconductor wafer fabrication facilities (wafer fabs). Process engineers impose CQT constraints that require wafers to start a subsequent operation within a given time window after a certain operation is completed to prevent native oxidation and contamination effects on the wafer surface. We equip dataset 2 of the SMT2020 testbed with production control logic to avoid CQT constraint violations. Therefore, two different CQT-aware dispatching rules and a combination of a lot stopping strategy with these rules are proposed. The effect of the production control strategies is investigated by means of a simulation study. We show that the number of CQT violations can be reduced without large deteriorations of global performance measures such as cycle time and throughput.
pdf
MASM: Semiconductor Manufacturing
Simulation Methodology II
Chair: Leon McGinnis (Georgia Institute of Technology)
Using Accuracy Measurements To Evaluate Simulation Model Simplification
Igor Stogniy (Technische Universität Dresden) and Wolfgang Scholl (Infineon Technologies Dresden GmbH)
Abstract
Infineon Technologies Dresden has long used discrete-event simulation to optimize production planning for its fully automated frontend manufacturing lines. There is a need to reduce maintenance effort, to increase transparency for validation and verification, and to improve flexibility for the fast simulation of scenarios with a focus on qualitative statements. Less detailed models, in which some components are omitted, will be utilized. This paper considers a simplification of the process flows that substitutes operations with constant delays. The main idea is to examine the accuracy measurements used to evaluate the simplification. It is shown that standard accuracy measurements (e.g., mean absolute error, correlation coefficient) perform rather poorly. It is suggested instead to use measurements based on lot cycle time distributions (e.g., goodness-of-fit tests). Nine types of simplification sieve functions were analyzed, with the analyses based on the MIMAC dataset 5 model.
pdf
An Analysis-agnostic System Model of the Intel Minifab
Leon McGinnis (Georgia Institute of Technology)
Abstract
What if, instead of trying to model production systems using a simulation language, we first formally specified them by creating an analysis-agnostic system model (AASM) which could be developed collaboratively with the stakeholders and then be used as a simulation model requirements document, the benchmark for simulation model verification and a valuable asset for validation? This paper demonstrates an approach for creating an AASM for a wafer fab, using as the demonstration vehicle a famously simple yet intriguingly complex case study, the Intel Minifab case developed by Karl Kempf 25 years ago.
pdf
Simulation-based Digital Twin of a Complex Shop-Floor Logistics System
Dávid Gyulai and Júlia Bergmann (Institute for Computer Science and Control), Attila Lengyel (Western Digital Corporation), and Botond Kádár and Dávid Czirkó (EPIC InnoLabs Ltd.)
Abstract
Digital analytics tools have been at the forefront of innovation in the manufacturing industry in recent years. To keep pace with the demands of industrial digitization, companies seek opportunities to streamline processes and enhance overall efficacy, opting to replace conventional engineering tools with data-driven models. In a high-tech factory, detailed data is collected about the products, processes, and assets in near-real time, providing a basis for building trustworthy analytical models. In this paper, a novel discrete-event simulation (DES) model is proposed for the detailed representation of a complex shop-floor logistics system employing automated guided vehicles (AGVs). The simulation model is applied to test new AGV management policies, involving both vehicle capacity planning and dispatching decisions. To illustrate the usefulness of the model and the effectiveness of the selected policy, numerical results of a case study are presented, in which the selected policy was realized in a real manufacturing environment.
pdf
MASM: Semiconductor Manufacturing
Panel
Chair: John Fowler (Arizona State University)
Scheduling and Simulation in Waferfabs: Competitors, Independent Players or Amplifiers?
Peter Lendermann (D-SIMLAB Technologies Pte Ltd); Stephane Dauzère-Pérès (Mines Saint-Étienne, Univ Clermont Auvergne); Leon McGinnis (Georgia Institute of Technology); Lars Mönch (University of Hagen); Tina O'Donnell (Seagate Technology); Georg Seidel (Infineon Technologies Austria AG); and Philippe Vialletelle (ST Microelectronics)
Abstract
This panel will discuss the inherent conflict between the application of (Discrete-Event) Simulation and Scheduling techniques to manage and optimise capacity and material flow in Semiconductor Frontend Manufacturing (wafer fabrication). Representatives from both industry and academia will describe advantages and shortcomings of the respective techniques, with a specific focus on challenges arising from the recent and anticipated future evolution of the nature of such manufacturing environments, and suggest solution approaches as well as research issues that need to be addressed.
pdf
MASM: Semiconductor Manufacturing
Quality
Chair: Stephane Dauzère-Pérès (École Nationale Supérieure des Mines de Saint-Étienne, BI Norwegian Business School)
Interpretable Anomaly Detection for Knowledge Discovery in Semiconductor Manufacturing
Mattia Carletti, Marco Maggipinto, Alessandro Beghi, and Gian Antonio Susto (University of Padova) and Natalie Gentner, Yao Yang, and Andreas Kyek (Infineon Technologies AG)
Abstract
Machine Learning-based Anomaly Detection approaches are efficient tools to monitor complex processes. One of the advantages of such approaches is that they provide a unique anomaly indicator, a quantitative index that captures the degree of 'outlierness' of the process at hand while considering possibly hundreds or more variables at the same time, the typical scenario in semiconductor manufacturing. One of the drawbacks of such approaches is that Root Cause Analysis is not guided by the system itself. In this work, we show the effectiveness of a method called DIFFI in equipping Isolation Forest, one of the most popular Anomaly Detection algorithms, with interpretability traits that can help guide corrective actions and knowledge understanding. The approach is validated on real-world semiconductor manufacturing data related to a Chemical Vapor Deposition process.
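To illustrate the isolation principle that Isolation Forest builds on (a from-scratch toy sketch, not the DIFFI method itself; the data, depth limit, and tree count are invented), anomalous points are separated by random splits after only a few levels, so their average path length is short:

```python
import random, math

def path_length(x, data, rng, depth=0, limit=8):
    """Length of a random isolation path for point x: anomalies
    end up alone after few random splits, normal points need more."""
    if depth >= limit or len(data) <= 1:
        return depth
    f = rng.randrange(len(x))               # random feature
    vals = [p[f] for p in data]
    lo, hi = min(vals), max(vals)
    if lo == hi:
        return depth
    split = rng.uniform(lo, hi)             # random split value
    same_side = [p for p in data if (p[f] < split) == (x[f] < split)]
    return path_length(x, same_side, rng, depth + 1, limit)

def anomaly_score(x, data, n_trees=50, seed=0):
    """Average path length mapped to (0, 1]; higher = more anomalous."""
    rng = random.Random(seed)
    avg = sum(path_length(x, data, rng) for _ in range(n_trees)) / n_trees
    return 2 ** (-avg / math.log2(len(data) + 1))
```

DIFFI's contribution, per the abstract, is to attach feature-level interpretability to such a forest so that Root Cause Analysis can be guided by the model; that part is not sketched here.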
pdf
Dynamic Sampling For Risk Minimization In Semiconductor Manufacturing
Etienne Le Quere (Soitec, Ecole des Mines de Saint-Etienne); Stéphane Dauzère-Pérès and Karim Tamssaouet (Ecole des Mines de Saint-Etienne); and Cédric Maufront and Stéphane Astie (Soitec)
Abstract
To control the quality of their processes, manufacturers perform measurement operations on their products. In semiconductor manufacturing, measurement capacity is limited because metrology tools are expensive, thus only a limited number of product lots can be measured. Selecting the set of lots to control in order to minimize risk is called sampling. In this paper, the objective is to minimize the number of wafers at risk, i.e., the number of wafers produced on a machine between two lots that are controlled. The problem can be modeled as the maximization of a submodular set function subject to different capacity constraints. The resulting problems, which are NP-hard, can be modeled as integer linear programs. Computational experiments on industrial instances show that the integer linear programs solve the problem to optimality, and that standard heuristics have approximation ratios that are good enough for industrial implementation.
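The classic greedy heuristic for submodular maximization under a capacity constraint, of the kind such sampling problems admit, can be sketched as follows (the lot and wafer identifiers are invented; the paper's actual objective and constraints are richer):

```python
def greedy_sampling(lots, capacity):
    """Greedy heuristic for a submodular coverage objective: up to the
    metrology capacity, repeatedly pick the lot whose measurement covers
    the most still-uncovered wafers at risk.
    `lots` maps a lot id to the set of wafer ids it would cover."""
    covered, chosen = set(), []
    for _ in range(capacity):
        best = max(lots, key=lambda l: len(lots[l] - covered))
        if not (lots[best] - covered):
            break  # no remaining marginal gain
        chosen.append(best)
        covered |= lots[best]
    return chosen, covered

# Hypothetical instance: three candidate lots, capacity for two measurements.
chosen, covered = greedy_sampling({"A": {1, 2, 3}, "B": {3, 4}, "C": {5}}, 2)
```

For monotone submodular objectives this greedy rule carries the well-known (1 - 1/e) approximation guarantee, which is one reason such heuristics can be "good enough for industrial implementation".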
pdf
Enhancing Scalability of Virtual Metrology: A Deep Learning-based Approach for Domain Adaptation
Natalie Gentner (University of Padova, Infineon Technologies AG); Mattia Carletti (University of Padova); Andreas Kyek (Infineon Technologies AG); Gian Antonio Susto (University of Padova); and Yao Yang (Infineon Technologies AG)
Abstract
One of the main challenges in developing Machine Learning-based solutions for Semiconductor Manufacturing is the high number of machines in production and their differences, even when considering chambers of the same machine; this poses a challenge to the scalability of Machine Learning-based solutions in this context, since the development of chamber-specific models for all equipment in the fab is unsustainable. In this work, we present a domain adaptation approach for Virtual Metrology (VM), one of the most successful Machine Learning-based technologies in this context. The approach provides a common VM model for two identical-in-design chambers whose data follow different distributions. It is based on Domain-Adversarial Neural Networks and has the merit of exploiting raw trace data, avoiding the loss of information that typically affects feature-based VM modules. The effectiveness of the approach is demonstrated on a real-world etching process.
pdf
MASM: Semiconductor Manufacturing
Planning
Chair: Hans Ehm (Infineon Technologies AG)
Modelling And Mathematical Optimization For Capacity Planning Of A Semiconductor Wafer Test Module
Julia Siess (University of Regensburg) and Hermann Gold and Thomas Ponsignon (Infineon Technologies AG)
Abstract
This paper focuses on scheduling and capacity planning problems in semiconductor wafer test. The planning of wafer test, the final stage of the semiconductor frontend manufacturing process, is very complex due to many uncertain factors. Moreover, the processes and system allocation are not fully automated at Infineon Technologies AG in Regensburg but partly involve manual handling. For this reason, a mathematical optimization program has been developed to compute a realistic delivery plan including a machine allocation plan. Various data on internal resources and dedication matrices were collected by specialized departments and serve as the basis for the optimization, which was carried out with IBM OPL CPLEX Optimization Studio using mixed-integer programming. The focus was on optimizing delivery due dates at the lowest possible cost in terms of setup and capacity.
pdf
Maintenance With Production Planning Constraints In Semiconductor Manufacturing
Alexandre Moritz (Mines Saint-Étienne, Univ Clermont Auvergne; STMicroelectronics); Stéphane Dauzère-Pérès (Mines Saint-Étienne, Univ Clermont Auvergne; BI Norwegian Business School); Oussama Ben-Ammar (Mines Saint-Étienne, Univ Clermont Auvergne); and Philippe Vialletelle (STMicroelectronics)
Abstract
In semiconductor manufacturing, as in most manufacturing contexts, preventive maintenance is required to avoid machine failures and to ensure product quality. In this paper, we are interested in optimally planning maintenance operations given a production plan that must be satisfied. Two Integer Linear Programming models are proposed that aim at completing as many maintenance operations as possible and as late as possible, while respecting their deadlines and the capacity constraints on machines. Computational experiments on industrial data are presented and discussed.
pdf
Characterizing Customer Ordering Behaviors In Semiconductor Supply Chains With Convolutional Neural Networks
Marco Ratusny, Alican Ay, and Thomas Ponsignon (Infineon Technologies AG)
Abstract
Advancements in the semiconductor industry have resulted in the need to extract vital information from vast amounts of data. In the operational processes of demand planning and order management, it is important to understand customer demand data due to its potential to provide insights for managing supply chains. For this purpose, customer ordering behaviors are visualized in the form of two-dimensional heat maps. The goal is to classify the customers into predefined ordering patterns, using the example of a semiconductor manufacturer, namely Infineon Technologies. To this end, a convolutional neural network is used. By classifying the customers into preselected ordering patterns, a better understanding of how customer demand develops over time is achieved. The results show that customers have a certain ordering pattern, but their behavior can be meaningfully classified only to a certain extent due to unidentified behaviors in the data. Further research could identify additional ordering patterns.
pdf
MASM: Semiconductor Manufacturing
Scheduling
Chair: Andy Ham (Liberty University)
A Simulation-based Sequential Search Method for Multi-Objective Scheduling Problems of Manufacturing Systems
Je-Hun Lee, Young Kim, and Yun Bae Kim (Sungkyunkwan University); Byung-Hee Kim and Gu-Hwan Chung (VMS Solutions); and Hyun-Jung Kim (KAIST)
Abstract
A scheduling method based on a combination of dispatching rules is often used in dynamic and flexible manufacturing systems to consider changing production environments. A weighted sum method, which assigns weights to dispatching rules and selects the job that has the largest weighted sum as the next job, is frequently used in LCD or semiconductor manufacturing systems. The weights of dispatching rules in each process stage are determined by fab engineers and adjusted periodically to reflect the current state of the system. Fab engineers choose appropriate weights based on their experiences to improve multiple objectives, such as maximization of throughput and minimization of setup times simultaneously. In this study, we propose a systematic sequential search method for dispatching rule weights to provide Pareto-front solutions. The proposed method divides a search space into sub-spaces with decision tree methods generated for each objective and also uses surrogate models to estimate objective values.
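The weighted-sum dispatching idea described above can be sketched in a few lines; the rules, weights, and job attributes below are invented placeholders, not the paper's configuration:

```python
def pick_next_job(queue, rules, weights):
    """Weighted-sum dispatching: each waiting job's priority is the
    weighted sum of its scores under several dispatching rules;
    the argmax job is released next."""
    return max(queue, key=lambda job: sum(w * r(job) for r, w in zip(rules, weights)))

# Hypothetical rules: shortest processing time and earliest due date,
# both expressed so that a higher score means "more urgent".
jobs = [{"p": 5, "due": 10}, {"p": 2, "due": 50}, {"p": 8, "due": 4}]
rules = [lambda j: 1.0 / j["p"], lambda j: 1.0 / j["due"]]
nxt = pick_next_job(jobs, rules, weights=[0.5, 0.5])
```

The paper's contribution is then the search over the weight vector itself: different weight settings trade off objectives such as throughput and setup time, and the proposed sequential search explores that space for Pareto-front solutions.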
pdf
Advanced Production Scheduling in a Seagate Technology Wafer Fab
Georgios M. Kopanos, Dennis Xenos, and Slava Andreev (Flexciton Ltd) and Tina O’Donnell and Sharon Feely (Seagate Technology)
Abstract
This work focuses on the highly complicated scheduling problem in wafer fabs. We first provide insights into the broader impact of high-quality scheduling decisions in the semiconductor industry, and then discuss traditional heuristic-based scheduling practices versus our mathematical optimization approach. The comparison of the newly proposed scheduling technology against the current simulation scheduler at Seagate shows significant improvements in performance through a benchmark study that involves nine historical datasets of the metrology toolsets in the Seagate Springtown facility. On average, compared to SimModel schedules, our schedules report a significant reduction of more than 43% in cycle times for high-priority wafers, a reduction of about 9% in total cycle times, and a 7% increase in throughput. The large improvements in these schedule metrics are mainly due to a more balanced allocation of wafers to machines and better batch formations that effectively exploit wafers' release dates and priorities.
pdf
Integrated Scheduling of Jobs, Reticles, Machines, AMHS and ARHS in Semiconductor Manufacturing
Andy Ham (North Carolina A&T State University), Myoung-Ju Park (Kyung Hee University), Ho-Jun Shin and Si-Young Choi (CSPI), and John W. Fowler (Arizona State University)
Abstract
This paper studies the simultaneous scheduling of production and material transfer in the semiconductor photolithography area. In particular, jobs are transferred by a material handling system that employs a fleet of vehicles. Reticles, serving as an auxiliary resource, are also transferred from one place to another by a different set of vehicles. The extremely complex scheduling problem that includes jobs, reticles, machines, and two different sets of vehicles is (apparently) studied for the first time. A novel constraint programming model is proposed.
pdf
Military Applications and Homeland Security
Track Coordinator - Military Applications and Homeland Security: Nathaniel Bastian (Joint Artificial Intelligence Center, Department of Defense), Andrew Hall (United States Military Academy, Army Cyber Institute)
Military Applications and Homeland Security, Plenary
Military Keynote
Chair: Nathaniel Bastian (Joint Artificial Intelligence Center, Department of Defense)
Combining AI with M&S to Meet Emerging Military Challenges
Peter Schwartz (MITRE Corporation)
Abstract
The U.S. is returning to a state of great power competition. The U.S. military must once again contend with near-peer adversaries that can bring to bear advanced weapon systems that are used in coordination with diplomatic, information, military, and economic (DIME) instruments of national power. In response to these challenges, the U.S. military is turning to new concepts of warfare such as Multi-Domain Operations (MDO) and Joint All-Domain Command and Control (JADC2). These concepts seek to orchestrate capabilities more tightly across domains (land, air, maritime, space, and cyberspace) as a means to converge effects rapidly and dynamically. This approach to warfare can provide U.S. commanders with a greater variety of options while presenting an adversary with multiple simultaneous dilemmas; however, it can also present U.S. commanders and their staffs with a far more complex battlespace and much shorter planning and decision timelines than they have faced in the past.
The U.S. Department of Defense is looking to artificial intelligence (AI) and machine learning (ML) as potential technologies to support the execution of MDO and JADC2. AI and ML are often combined with models and simulations (M&S) to provide enhanced capabilities. This talk will present different configurations that combine AI/ML with M&S and discuss their potential military applications. It will conclude with a presentation of a prototype course of action (COA) analysis tool that has been developed for the Army, including the specific way this tool combines AI with M&S and future work that will enable it to better support MDO and JADC2.
pdf
Military Applications and Homeland Security
Simulation-Based Approaches for Defense Training and Military Workforce Modeling
Chair: Nathaniel Bastian (Joint Artificial Intelligence Center, Department of Defense)
Low Altitude Air Defense Training System
John R. Surdu, Dirk Harrington, Jason Black, Tony Lynch, Wyatt Schmitz, and Jordan Bracken (Cole Engineering Services, Inc.)
Abstract
The Army has a significant training gap for Stinger gunners and teams. In particular, there is no solution that enables Stinger teams to get credit for successful engagements in large force-on-force exercises if the aircraft are not equipped with MILES detectors, which is apparently usually the case. This paper describes the development of a surrogate Stinger missile that facilitates both home station and deployed force-on-force training. The effort described in this paper resulted in a single device that addresses the training previously addressed by the three other devices, all of which are aging and often irreplaceable. This paper describes the design of the Low Altitude Air Defense Training System (LAADTS), the implementation of the prototype and results, and the future work.
pdf
Workforce Populations: Empirical versus Markovian Dynamics
Robert M. Bryce and Jillian Anne Henderson (Government of Canada)
Abstract
Workforce populations are often modeled under a memory-free attrition assumption. This simple theoretical model, which corresponds to an exponential survival time distribution, allows population trajectories to be forecast, for example, by using a discrete time Markov model or the related differential system. However, in practice the distribution of survival times for a given population is often poorly described by an exponential. Here we present a study where different populations in the Canadian Armed Forces are considered. We contrast empirical survival time distributions with the matched exponential, and find distributions ranging from being close to exponential (e.g., Reserve Force) to distinctly non-exponential (e.g., Regular Force). We perform numerical experiments to determine how population dynamics diverge from the assumed Markovian dynamics, finding moderate error levels for the populations studied. On a coarse level the Markovian assumption appears remarkably valid, but with sufficient error (ca. 5–10%) to warrant caution.
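The memory-free attrition assumption the study tests can be stated in two short functions (an illustrative sketch, not the authors' code; the headcount and rate are invented):

```python
import math

def markov_forecast(n0, attrition_rate, periods):
    """Memory-free (Markovian) forecast: every member leaves with the
    same probability each period, so expected headcount decays
    geometrically regardless of time already served."""
    return [n0 * (1 - attrition_rate) ** t for t in range(periods + 1)]

def exponential_survival(t, mean_service):
    """Continuous-time counterpart: the survival function implied by
    the memory-free assumption is a pure exponential."""
    return math.exp(-t / mean_service)

forecast = markov_forecast(n0=1000, attrition_rate=0.1, periods=3)
```

The paper's point is that real survival-time distributions (e.g., for the Regular Force) can deviate markedly from this exponential shape, which is exactly where the geometric forecast accumulates its 5-10% error.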
pdf
Methods for Estimating Incidence Rates and Predicting Incident Numbers in Military Populations
Stephen Okazawa (Defence Research and Development Canada)
Abstract
Monitoring of the health of military populations and developing effective personnel management plans relies on the ability to measure and predict the incidence of important events such as attrition, training failures, promotions and transfers between groups. Incidence rates are widely relied on to report the prevalence of these events and for modelling to predict future events. However, calculating and using incidence rates in real-world scenarios is not straightforward, and challenges are frequently encountered. This paper provides a detailed mathematical development of equations that define incidence rates, Bayesian techniques for estimating rates based on the available evidence and quantifying how certain the estimate is, and a beta-binomial model for predicting the variation in future event numbers. These methods do not require significant additional effort or resources to apply in typical military workforce modelling applications, but produce meaningful improvements in the depth and accuracy of the analysis.
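A minimal sketch of the Beta posterior update and the beta-binomial predictive distribution described above (illustrative only; the uniform Beta(1, 1) prior and the counts are assumptions, and the paper's full development is richer):

```python
import math

def beta_fn(x, y):
    """Beta function via the gamma function; fine for moderate counts."""
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

def posterior_rate(events, exposures, alpha=1.0, beta=1.0):
    """Posterior of an incidence rate under a Beta(alpha, beta) prior:
    Beta(alpha + events, beta + exposures - events).
    Returns the posterior mean and standard deviation."""
    a = alpha + events
    b = beta + exposures - events
    mean = a / (a + b)
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, sd

def beta_binomial_pmf(k, n, a, b):
    """Predictive probability of k events among n future exposures,
    integrating over the rate uncertainty (Beta-Binomial)."""
    return math.comb(n, k) * beta_fn(a + k, b + n - k) / beta_fn(a, b)
```

The beta-binomial predictive is wider than a plain binomial with the point-estimated rate, which is precisely how the method quantifies how certain the rate estimate is.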
pdf
Military Applications and Homeland Security
Emerging Techniques for Rocket Guidance Simulation, MSaaS and DODAS with RDBMS
Chair: Andrew Hall (Army Cyber Institute, United States Military Academy)
A Neural Network for Sensor Hybridization in Rocket Guidance
Raul de Celis, Pablo Solano-Lopez, and Luis Cadarso (Universidad Rey Juan Carlos)
Abstract
Improving accuracy is a cornerstone for ballistic rockets. Using inertial navigation systems and Global Navigation Satellite Systems (GNSS), accuracy becomes independent of range. However, during the terminal phase of flight, when movement is governed by non-linear and rapidly changing forces and moments, guidance strategies based on these systems produce large errors in attitude and position determination. Employing additional sensors that are independent of cumulative errors and jamming, such as a quadrant photo-detector semi-active laser, can mitigate these effects. This research presents a new non-linear hybridization algorithm, based on neural networks, to feed navigation and control systems. The objective is to accurately predict the line-of-sight vector from multiple sensor measurements. Non-linear simulations based on real flight dynamics are used to train the neural networks. Simulation results demonstrate the performance of the presented approach in a 6-DOF simulation environment, showing high accuracy and robustness against parameter uncertainty.
pdf
ArTIC-M&S: An Architecture for TOSCA-based Inter-Cloud Modeling and Simulation
Paolo Bocciarelli and Andrea D'Ambrogio (University of Rome Tor Vergata) and Umut Durak (German Aerospace Center (DLR))
Abstract
Modeling & Simulation (M&S) techniques have proven their effectiveness for several intended uses, from complex systems analysis to innovative training activities.
The emerging M&S-as-a-Service (MSaaS) paradigm deals with the adoption of service-orientation and cloud computing to ease the development and provision of M&S applications.
Due to its relevance for the military domain, the NATO MSG-164 is investigating how the MSaaS potential can be exploited to support NATO objectives.
In this context, this work proposes ArTIC-MS, an MSaaS architecture that aims at investigating innovative approaches to ease the building of inter-cloud MSaaS applications.
ArTIC-MS's main objective is to provide effective interoperability among M&S services provided by different nations to seamlessly build complex MSaaS applications. Specifically, the work addresses the use of the TOSCA (Topology and Orchestration Specification for Cloud Applications) standard, and also discusses how ArTIC-MS may cope with the orchestration of M&S services available on non-TOSCA infrastructures.
pdf
Advancing Self-healing Capabilities in Interconnected Microgrids via Dynamic Data Driven Application Systems with Relational Database Management
Abdurrahman Yavuz, Joshua M. Darville, and Nurcin Celik (University of Miami); Jie Xu and Chun-Hung Chen (George Mason University); and Brent Langhals and Ryan Engle (Air Force Institute of Technology)
Abstract
A microgrid is an interdependent electrical distribution system containing renewable energy sources, local demand, and a coupled connection to the main grid. A very appealing feature of a microgrid is its capability to self-heal from disruptions, which is made even more viable with the emergence of interconnected collaborative microgrids. In this study, we present a dynamic data driven application system framework that integrates a relational database management system (RDBMS) to advance self-healing capabilities among interconnected microgrids. An RDBMS facilitates access to the various sensors in the microgrid for fast abnormality detection and for determining the optimal self-healing action to implement. We build an agent-based simulation model (ABM) for three self-healing interconnected microgrids. Using the ABM, we compare the self-healing operations of microgrids with and without an RDBMS. Simulation results show that an RDBMS may lead to faster response times and thus advance the self-healing capabilities of interconnected microgrids.
pdf
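As a toy illustration of the RDBMS-backed abnormality detection the abstract describes, the sketch below stores sensor readings in an in-memory SQLite table and flags out-of-band voltages with one query. The schema, nominal voltage, and threshold are invented for the sketch, not taken from the paper.

```python
import sqlite3

# In-memory relational store for microgrid sensor readings (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor_id INTEGER, t REAL, voltage REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [(1, 0.0, 230.1), (1, 1.0, 229.8), (2, 0.0, 198.5), (2, 1.0, 231.0)],
)

# A single SQL query flags readings outside a +/-5% band around a 230 V nominal.
abnormal = conn.execute(
    "SELECT sensor_id, t, voltage FROM readings "
    "WHERE voltage < 230 * 0.95 OR voltage > 230 * 1.05"
).fetchall()
print(abnormal)  # -> [(2, 0.0, 198.5)]
```

In the paper's setting such queries would feed the self-healing decision logic; the point here is only that relational access reduces the abnormality scan to a declarative query.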
Military Applications and Homeland Security
Managing and Supporting Projects through the Lifecycle
Chair: Matthew Dickinson (Systecon North America); Andrew Hall (Army Cyber Institute, United States Military Academy)
Modeling and Simulation: Balancing Performance, Schedule, and Cost
Paul Brown (Systecon North America) and Courtney Kawazoe and Alex Nguyen (Naval Surface Warfare Center)
Abstract
Given the rapid technological advances realized in the defense industry, asymmetric threats present new challenges to the US Navy. Directed Energy (DE), a rapid prototyping, experimentation and demonstration (RPED) initiative, seeks to develop and deliver advanced laser capabilities to the fleet to mitigate newly discovered capability gaps. DE programs will utilize a wholeness approach to minimize excess spending, keep a tight schedule, and meet high readiness requirements, all at best value, with analysis conducted in the early stages of the program's life. Monitoring the readiness of DE programs includes tracking metrics such as operational availability and mission effectiveness. Evaluating DE design performance during preliminary design reviews, as well as throughout the acquisition milestones, gives NAVSEA the opportunity to make informed trade decisions in the design and production phases. Capturing design trades early in the system lifecycle will both increase Operational Availability and decrease Total Ownership Cost.
pdf
Product Supportability through Lifecycle Modeling and Simulation
Justin Woulfe (Systecon North America) and Magnus Andersson (Systecon AB)
Abstract
Current changes in DoD budgeting processes and in the constraints on available funding have resulted in inadequate support for our warfighters' needs. The decision environment reduces to a key question impacting warfighter capabilities: How should funding be distributed to achieve the optimal balance between readiness, performance and cost? This paper outlines the fundamentals of successful Product Life Cycle Management, a method to steer systems toward fulfilling operational needs at the lowest possible Total Ownership Cost (TOC). The paper discusses critical decision points in different phases of the system's life cycle and suggests an approach that uses modelling and simulation tools to answer key questions and provide the required decision support.
pdf
Model Uncertainty and Robust Simulation
Track Coordinator - Model Uncertainty and Robust Simulation: Canan Gunes Corlu (Boston University), Enlu Zhou (Georgia Institute of Technology)
Model Uncertainty and Robust Simulation
New Advances in Simulation Optimization
Chair: Henry Lam (Columbia University)
Context-dependent Ranking and Selection under a Bayesian Framework
Best Contributed Theoretical Paper - Finalist
Haidong Li (Peking University), Henry Lam (Columbia University), Zhe Liang (Tongji University), and Yijie Peng (Peking University)
Abstract
We consider a context-dependent ranking and selection problem. The best design is not universal but depends on the contexts. Under a Bayesian framework, we develop a dynamic sampling scheme for context-dependent optimization (DSCO) to efficiently learn and select the best designs in all contexts. The proposed sampling scheme is proved to be consistent. Numerical experiments show that the proposed sampling scheme significantly improves the efficiency in context-dependent ranking and selection.
pdf
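The abstract's setting, where the best design varies with the context, can be illustrated with a toy sequential-sampling sketch. This is not the paper's DSCO scheme; the ground-truth means, the noise level, and the simple "sample the noisiest pair" allocation rule are all invented for illustration.

```python
import random
import statistics

random.seed(0)
contexts = [0, 1]
designs = [0, 1, 2]

def simulate(d, c):
    # Hypothetical ground truth: design d has mean d in context 0 and -d in context 1,
    # so the best design genuinely depends on the context.
    return d * (1 if c == 0 else -1) + random.gauss(0, 0.5)

# Initial replications, then allocate extra effort to the noisiest (design, context) pair.
samples = {(d, c): [simulate(d, c) for _ in range(5)] for d in designs for c in contexts}
for _ in range(200):
    d, c = max(samples, key=lambda k: statistics.variance(samples[k]) / len(samples[k]))
    samples[(d, c)].append(simulate(d, c))

# Select the best design separately for each context.
best = {c: max(designs, key=lambda d: statistics.mean(samples[(d, c)])) for c in contexts}
print(best)
```

With these toy means the selection comes out differently per context (design 2 in context 0, design 0 in context 1), which is exactly the "no universal best design" situation the paper addresses with a principled Bayesian sampling scheme.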
Confidence Intervals and Regions for Quantiles Using Conditional Monte Carlo and Generalized Likelihood Ratios
Lei Lei (Chongqing University), Christos Alexopoulos (Georgia Institute of Technology), Yijie Peng (Peking University), and James Wilson (North Carolina State University)
Abstract
This article develops confidence intervals (CIs) and confidence regions (CRs) for quantiles based on independent realizations of a simulation response. The methodology uses a combination of conditional Monte Carlo (CMC) and the generalized likelihood ratio (GLR) method. While batching and sectioning methods partition the sample into nonoverlapping batches, and construct CIs and CRs by estimating the asymptotic variance using sample quantiles from each batch, the proposed techniques directly estimate the underlying probability density function of the response. Numerical results show that the CIs constructed by applying CMC, GLR, and sectioning lead to comparable coverage results, which are closer to the targets compared with batching alone for relatively small samples; and the coverage rates of the CRs constructed by applying CMC and GLR are closer to the targets than both sectioning and batching when the sample size is relatively small and the number of probability levels is relatively large.
pdf
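For readers unfamiliar with the batching baseline the abstract contrasts with, here is a minimal sketch: split n i.i.d. observations into b nonoverlapping batches, take the sample quantile of each batch, and form a t-interval from the batch quantiles. The Exp(1) example data, batch count, and critical value are illustrative choices, not the paper's experiments.

```python
import math
import random

def sample_quantile(xs, p):
    # Simple order-statistic estimate of the p-quantile.
    s = sorted(xs)
    return s[min(int(p * len(s)), len(s) - 1)]

def batching_ci(data, p, b, t_crit):
    # Batch quantiles are treated as approximately i.i.d. normal replicates.
    k = len(data) // b
    qs = [sample_quantile(data[i * k:(i + 1) * k], p) for i in range(b)]
    mean = sum(qs) / b
    var = sum((q - mean) ** 2 for q in qs) / (b - 1)
    half = t_crit * math.sqrt(var / b)
    return mean - half, mean + half

random.seed(1)
data = [random.expovariate(1.0) for _ in range(10_000)]
lo, hi = batching_ci(data, p=0.9, b=10, t_crit=2.262)  # t_{0.975, df=9}
print(round(lo, 3), round(hi, 3))  # the true 0.9-quantile of Exp(1) is ln(10) ~ 2.303
```

The CMC+GLR approach in the paper instead estimates the density directly, which the authors show improves coverage for small samples relative to batching alone.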
Optimal Switching in a Dynamic, Stochastic, Operating Environment
Byunghee Choi (Lebanon Valley College) and Robert D. Weaver (Pennsylvania State University)
Abstract
The value of flexibility in operations for hydroelectric power plants is apparent from their investment in multiple generators with variable intensity of operation. This flexibility supports rapid response to stochastic processes. Hydroelectricity generation often competes for water flow with downstream demands, including irrigation for agriculture, urban water use, flood risk mitigation, and ecosystem services. Each of these demands follows a stochastic process, implying asynchronous demands for water flow and causing conflicts between satisfying irrigation or other water uses and meeting electricity demand. This study presents a dynamic model of hydroelectric generation to analyze the dynamics of optimal water allocation for irrigation and electricity generation within our generalized framework of optimal switching. Our main contribution is to solve the problem numerically by relying on Monte-Carlo simulations. Moreover, the method allows a generalization of the problem that was not previously possible.
pdf
Model Uncertainty and Robust Simulation
Simulation Optimization under Uncertainty
Chair: Angel A. Juan (IN3-Open University of Catalonia (UOC), IN3)
A Simheuristic Algorithm for Reliable Asset and Liability Management under Uncertainty Scenarios
Christopher Bayliss, Armando Miguel Nieto, Marti Serra, Mariem Gandouz, and Angel A. Juan (Universitat Oberta de Catalunya)
Abstract
The management of assets and liabilities is of critical importance for insurance companies and banks. Complex decisions need to be made regarding how to assign assets to liabilities in such a way that the overall benefit is maximised over a time horizon. In addition, the risk of not being able to cover the liabilities at any given time must be kept under a certain threshold level. This optimisation challenge is known in the literature as the asset and liability management (ALM) problem. In this work, we propose a biased-randomised (BR) algorithm to solve a deterministic version of the ALM problem. Firstly, we outline a greedy heuristic. Secondly, we transform it into a BR algorithm by employing skewed probability distributions. The BR algorithm is then extended into a simheuristic by incorporating Monte-Carlo simulation to deal with the stochastic version of the problem.
pdf
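The biased-randomisation step the abstract mentions can be sketched in a few lines: instead of always taking the greedy-best candidate, pick from the greedy-sorted list using a skewed (here geometric) distribution, so good candidates are favoured but the search can escape the deterministic greedy path. The candidate list and the beta parameter value are illustrative.

```python
import math
import random

def br_pick(sorted_candidates, beta=0.3):
    # Geometric index selection, wrapped to the list length: index 0 (the greedy
    # choice) is most likely, later indices increasingly less so.
    idx = int(math.log(random.random()) / math.log(1 - beta)) % len(sorted_candidates)
    return sorted_candidates[idx]

random.seed(42)
# Candidates sorted by descending greedy score (e.g., asset yield in the ALM setting).
candidates = ["A", "B", "C", "D", "E"]
picks = [br_pick(candidates) for _ in range(1000)]
print(picks.count("A"), picks.count("E"))  # top candidate is chosen far more often
```

Embedding such a skewed selection inside a constructive heuristic, and then wrapping the whole search in Monte-Carlo simulation, is the general simheuristic pattern the paper follows.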
On the Scarcity of Observations when Modelling Random Inputs and the Quality of Solutions to Stochastic Optimisation Problems
Canan G. Corlu (Boston University), Javier Panadero (Universitat Oberta de Catalunya), Bhakti Stephan Onggo (University of Southampton), and Angel A. Juan (Universitat Oberta de Catalunya)
Abstract
Most of the literature on supply chain management assumes that the demand distributions and their parameters are known with certainty. However, this may not be the case in practice since decision makers may have access to limited amounts of historical demand data only. In this case, treating the estimated demand distributions and their parameters as the true distributions is risky, and it may lead to sub-optimal decisions. To demonstrate this, this paper considers an inventory-routing problem with stochastic demands, in which the retailers have access to limited amounts of historical demand data. We use a simheuristic method to solve the optimisation problem and investigate the impact of the limited amount of demand data on the quality of the simheuristic solutions to the underlying optimisation problem. Our experiment illustrates the potential impact of input uncertainty on the quality of the solution provided by a simheuristic algorithm.
pdf
Calibrating Input Parameters via Eligibility Sets
Yuanlu Bai and Henry Lam (Columbia University)
Abstract
Reliable simulation analysis requires accurately calibrating input model parameters. While there has been a sizable literature on parameter calibration that utilizes directly observed data, much less attention has been paid to the situation where only output-level data are available to justify input parameter choices. This latter problem, which is known as the inverse problem and relates to the model validation literature, involves several new challenges, one of which is the non-identifiability issue. In this paper we introduce the concept of an eligibility set to bypass non-identifiability, by relaxing the goal from consistent estimation to obtaining bounds on the input parameter values. We motivate this concept from the worst-case notion in robust optimization, and demonstrate how to compute the eligibility set via empirical matching between the simulated and the real outputs. We substantiate our procedure with theoretical error analysis and validate its effectiveness via numerical experiments.
pdf
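The eligibility-set idea can be illustrated with a toy example: keep every candidate input-parameter value whose simulated output statistic matches the observed real-world output within a tolerance, rather than insisting on a single identified estimate. The exponential input model, grid, and tolerance below are invented for the sketch.

```python
import random

random.seed(0)

def simulate_output(rate, n=2000):
    # Output-level statistic: mean of n simulated Exp(rate) interarrival times.
    return sum(random.expovariate(rate) for _ in range(n)) / n

real_output_mean = 0.5   # observed from the real system (true rate would be 2.0)
tolerance = 0.05
grid = [0.5 + 0.25 * i for i in range(15)]   # candidate rates 0.5 .. 4.0

# The eligibility set: all rates whose simulated output matches the real output.
eligible = [r for r in grid if abs(simulate_output(r) - real_output_mean) < tolerance]
print(eligible)
```

The result is an interval of rates around 2.0 rather than a point estimate, which is precisely the non-identifiability-tolerant output the paper formalizes and bounds.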
Model Uncertainty and Robust Simulation
Data-Driven Simulation Optimization
Chair: Eunhye Song (Pennsylvania State University)
Joint Resource Allocation for Input Data Collection and Simulation
Jingxu Xu (University of California, Berkeley); Peter W. Glynn (Stanford University); and Zeyu Zheng (University of California, Berkeley)
Abstract
Simulation is often used to evaluate and compare the performance of stochastic systems, where the underlying stochastic models are estimated from real-world input data. Collecting more input data yields stochastic models closer to reality, while generating more simulation replications reduces stochastic errors. With the objective of selecting the system with the best performance, we propose a general framework to analyze the joint resource-allocation problem for collecting input data and generating simulation replications. Two commonly arising features, correlation in input data and common random numbers in simulation, are jointly exploited to save cost and enhance efficiency. In the presence of both features, closed-form joint resource allocation solutions are given for the comparison of two systems.
pdf
Statistical Inference for Approximate Bayesian Optimal Design
Prateek Jaiswal and Harsha Honnappa (Purdue University)
Abstract
This paper studies a generic Bayesian optimal design formulation with chance constraints, where the decision variable lies in a separable, reflexive Banach space. This setting covers a gamut of simulation and modeling problems that we illustrate through two example problem formulations. The posterior objective cannot be computed, in general, and it is necessary to use approximate Bayesian inference. Sampling-based approximate inference, however, introduces significant variance and, in general, leads to non-convex approximate feasible sets, even when the original problem is convex. In this paper, we use variational Bayesian approximations that introduce no variance and retain the convexity of the feasibility set, subject to easily satisfied regularity conditions on the approximate posterior, albeit at the expense of a much larger bias. Our main results, therefore, establish large sample asymptotic consistency of the optimal solutions and optimal value of this approximate Bayesian optimal design formulation.
pdf
Simulation Optimization Based Feature Selection, a Study on Data-driven Optimization with Input Uncertainty
Kimia Vahdat and Sara Shashaani (North Carolina State University)
Abstract
In machine learning, removing uninformative or redundant features from a dataset can significantly improve the construction, analysis, and interpretation of the prediction models, especially when the set of collected features is extensive. We approach this challenge with simulation optimization over a high dimensional binary space in place of the classic greedy search in forward or backward selection or regularization methods. We use genetic algorithms to generate scenarios, bootstrapping to estimate the contribution of the intrinsic and extrinsic noise and sampling strategies to expedite the procedure. By including the uncertainty from the input data in the measurement of the estimators' variability, the new framework obtains robustness and efficiency. Our results on a simulated dataset exhibit improvement over state-of-the-art accuracy, interpretability, and reliability. Our proposed framework provides insight for leveraging Monte Carlo methodology in probabilistic data-driven modeling and analysis.
pdf
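The search-over-binary-masks idea the abstract describes can be sketched with a bare-bones genetic algorithm. The scoring function below is a made-up stand-in for the paper's bootstrapped prediction-error objective, and the informative-feature set is invented ground truth for the toy.

```python
import random

random.seed(3)
N_FEATURES = 8
INFORMATIVE = {0, 2, 5}   # hypothetical ground truth for the toy score

def score(mask):
    # Reward selecting informative features, penalise selecting extras.
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & INFORMATIVE) - 0.4 * len(chosen - INFORMATIVE)

def mutate(mask, p=0.1):
    # Flip each bit independently with probability p.
    return tuple(bit ^ (random.random() < p) for bit in mask)

pop = [tuple(random.randint(0, 1) for _ in range(N_FEATURES)) for _ in range(20)]
for _ in range(60):
    pop.sort(key=score, reverse=True)
    parents = pop[:10]                      # elitism: keep the best masks
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)    # one-point crossover plus mutation
        cut = random.randrange(1, N_FEATURES)
        children.append(mutate(a[:cut] + b[cut:]))
    pop = parents + children

best = max(pop, key=score)
print(best)
```

In the paper, each mask evaluation is itself noisy (bootstrap over the input data), which is what turns this combinatorial search into a simulation optimization problem with input uncertainty.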
Track Coordinator - Modeling Methodology: Rodrigo Castro (ICC-CONICET, Universidad de Buenos Aires), Gabriel Wainer (Carleton University)
Modeling Methodology
DEVS Modelling and Simulation: Applications
Chair: José Luis Risco-Martín (Complutense University of Madrid, Center for Computational Simulation)
Modelling Fog & Cloud Collaboration Methods on Large Scale
Khaldoon Al-Zoubi (Jordan University of Science & Technology, Carleton University) and Gabriel Wainer (Carleton University)
Abstract
Fog Computing is expected to decentralize clouds to improve users' quality of service in technologies like the Internet of Things, by pushing some computing onto mini-clouds (called Fogs) located close to end users. To enhance M&S with this concept, we have developed a complete set of Fog/Cloud collaboration methods for conducting simulation experiments: users manipulate their experiments through nearby Fog servers while M&S resources are dynamically discovered and allocated throughout the Fogs/Cloud. We had already built those methods using privately owned clouds. However, it was difficult in practice to study the methods' scalability using real system setups. As a result, we present here the simulation model that we developed to mimic this real system in order to study the proposed collaboration methods at large scale. This model was validated with reference to the real system. The results clearly show the scalability of the proposed methods at the structure and coordination levels.
pdf
Energy Efficiency Evaluation of Parallel Execution of DEVS Models in Multicore Architectures
Guillermo G. Trabes (Carleton University), Veronica Gil Costa (Universidad Nacional de San Luis), and Gabriel Wainer (Carleton University)
Abstract
Complex models in science and engineering need better techniques to execute simulations efficiently. As we need high performance computers with many processors and large memory to execute complex simulations faster, we face a problem with energy consumption. Therefore, we need new ways to define efficiency in simulations, measured not only in terms of computation time, but also in terms of the amount of energy required to execute them. The Discrete-Event System Specification (DEVS) formalism, a well-known technique for modeling and simulation, includes simulation algorithms that allow running DEVS models on parallel computers. Nevertheless, no studies exist on the energy efficiency of executing DEVS simulations on parallel computers. In this work, we show an energy efficiency evaluation of the execution of DEVS simulations on a shared-memory multicore architecture. The results presented show that executing in parallel can improve energy efficiency in these architectures.
pdf
A DEVS Simulation Algorithm Based on Shared Memory for Enhancing Performance
Román Cárdenas (Universidad Politécnica de Madrid, Carleton University); Kevin Henares (Universidad Complutense de Madrid); Patricia Arroba (Universidad Politécnica de Madrid, Center for Computational Simulation); Gabriel A. Wainer (Carleton University); and José L. Risco-Martín (Universidad Complutense de Madrid, Center for Computational Simulation)
Abstract
The Discrete EVent System Specification (DEVS) formalism provides a unified method to define any discrete-event system accurately. As the complexity of the system under study increases, the necessity of simulation engines with higher performance rises. In this research, we present a chained DEVS simulator, a DEVS-compliant, function-oriented simulation algorithm that exploits shared memory patterns to improve the performance of sequential and parallel simulations. We also illustrate the positive impact of this novel approach executing a set of DEVStone synthetic benchmarks and comparing a state-of-the-art simulation engine with an updated version that implements the chained algorithm. Results show that the chained simulator introduces up to 40% less synchronization overhead than the traditional simulation approach.
pdf
Modeling Methodology
DEVS Modelling and Simulation: Theory
Chair: Rodrigo Castro (Universidad de Buenos Aires, ICC-CONICET)
DEVS-Scripting: A Black-box Test Frame for DEVS Models
Matthew B. McLaughlin (Arizona State University, Fires Battle Lab) and Hessam S. Sarjoughian (Arizona State University)
Abstract
Experimental frames have been used in DEVS-based simulations to drive scenarios through injecting inputs and interpreting outputs. This design has traditionally called for separate models with distinct roles: generator, acceptor, and transducer. In certain controlled experiments such as model testing, sequential programming offers a simpler design with many benefits, specifically: code reduction, test case development throughput, and diagnostics for failed tests. This research offers a test framework that is derived from atomic DEVS and facilitates testing through scripting. The challenge for this research is to prove DEVS semantics are maintained when the experimental frame is tightly controlled by a script. Our solution uses a separate thread for this script and synchronizes program execution switching with a nest lock. Synchronization is key in showing that this design maintains DEVS semantics by nesting script code within the state transition functions of DEVS modeling components.
pdf
A New Simulation Algorithm for PDEVS Models with Time Advance Zero
Cristina Ruiz-Martín, Guillermo Trabes, and Gabriel Wainer (Carleton University)
Abstract
Discrete Event Systems Specification (DEVS) is a well-known formalism to develop models using the discrete-event approach. One advantage of DEVS is a clear separation between the modeling and simulation activities: the user only needs to develop models, and general algorithms execute the simulations. The PDEVS simulation protocol is a well-known and widely accepted algorithm to execute DEVS simulations. However, when events are scheduled with time advance equal to zero, this algorithm handles them sequentially: events that occur at the same time are processed one after the other. This may result in unwanted simulation results. In this work, we propose a new algorithm that ensures that the output bag of a model is transmitted only when all the outputs corresponding to a given simulation time have been collected.
pdf
Translating Process Interaction World View Models to DEVS: GPSS to Python(P)DEVS
Randy Paredis, Simon Van Mierlo, and Hans Vangheluwe (University of Antwerp)
Abstract
Different modeling languages are used for various Modeling and Simulation activities to build and/or describe complex systems. A subset of these languages use a discrete-event abstraction, adhering to a specific world view: either event scheduling, activity scanning, or process interaction. To study the relationship between these world views, and the semantics of their languages more closely, it is useful to create translations between them.
In this paper, we look at GPSS, a language in the process interaction world view, and describe a translation onto DEVS, a general-purpose event scheduling language. By specifying a translation that produces an equivalent DEVS model, we benefit from the advantages that DEVS offers, including modular model definitions, scalable simulators, real-time execution, etc. We focus on building a working prototype for a relevant subset of GPSS blocks that cover a wide range of functionality, and demonstrate the approach on a representative example.
pdf
Modeling Methodology
Spatio-Temporal Simulation: Methods
Chair: Gabriel Wainer (Carleton University)
Extended Model Space Specification for Mobile Agent-based Systems to Support Automated Discovery of Simulation Models
Hai Le and Xiaolin Hu (Georgia State University)
Abstract
Automated discovery of simulation models is a different simulation modeling approach from the traditional approach, where simulation models are handcrafted by modelers. Our previous work developed an automated simulation modeling approach for mobile agent-based systems that allows automated search of candidate models based on a search space and desired simulation behaviors. This paper extends the model space specification from previous work to support an expanded search space for automated discovery of simulation models. The extended specification includes supporting user-defined properties to capture internal states and other properties of mobile agents and adding a new Activation component for behaviors so that priorities among multiple behaviors can be dynamically computed based on surrounding environment. The extended specification is demonstrated by supporting discovery of simulation models that are not in the previous search space.
pdf
Pragmatic Logic-based Spatio-temporal Pattern Checking in Particle-based Models
Andreas Ruscheinski, Anja Wolpers, Philipp Henning, Tom Warnke, Fiete Haack, and Adelinde M. Uhrmacher (University of Rostock)
Abstract
Particle-based simulation is a powerful approach for modeling systems and processes of entities interacting in continuous space. One way to validate a particle-based simulation is to check for the occurrence of spatio-temporal patterns formed by the particles, for example by statistical model checking. Whereas spatio-temporal logics for describing spatio-temporal patterns exist, they are defined on discrete rather than continuous space. We propose an approach to bridge this gap by automatically translating the output of continuous-space particle-based simulations into an input for discrete-space spatio-temporal logics. The translation is parameterized with information about relevant regions and their development in time. We demonstrate the utility of our approach with a case study in which we successfully apply statistical model-checking to a particle-based cell-biological model. A Java implementation of our approach is available under an open-source license.
pdf
Composition of Geographic-based Component Simulation Models
William A. Boyd and Hessam S. Sarjoughian (Arizona State University)
Abstract
Separate simulation models (e.g., agent-based models) may depend on spatial data associated with geographic locations. The use of autonomous interaction models allows independent models to be composed into an aggregate model without alteration of the composed models. The Geographic Knowledge Interchange Broker (GeoKIB) is proposed as a mediator for spatio-temporal models. The GeoKIB regulates unidirectional interactions between composed models, whether of the same type or not. Different input and output data types are supported depending on whether data transmission is passive or active. Synchronization of time-tagged input and output values is made possible via connections to shared simulation clocks. A spatial conversion algorithm transforms any two-dimensional geographic data map into one for another region with different map cell sizes and boundaries. A composition of a cellular automaton and an agent-based model is developed to demonstrate the proposed approach to spatially-based heterogeneous model composition with the GeoKIB.
pdf
Modeling Methodology
Spatio-Temporal Simulation: Applications
Chair: Khaldoon Al-Zoubi (Carleton University)
Anomalous Transport of Infectious Diseases in Structured Populations
Günter Schneckenreither (Vienna University of Technology)
Abstract
Structural properties of populations strongly influence dynamic transport and interaction processes such as the transmission of infectious diseases. In this paper a hierarchical block model is investigated that schematically describes the configuration of social communities. It is shown that these models can produce super-diffusive spread and that the balance between intra- and inter-community contacts is a suitable factor for controlling the expression of this feature. The qualitative characteristics of epidemics simulated with this model are reproduced in numerical simulations of a basic fractional-in-space differential equation SIR model. Based on a stochastic formulation of the block structure, a preliminary connection between both models can be derived. The results of this paper confirm that certain structural characteristics of interacting populations can be simulated in aggregated models by employing fractional derivatives.
pdf
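For reference, the S/I/R bookkeeping underlying the abstract's fractional-in-space SIR model is the same as in the classical well-mixed SIR system; the fractional variant changes the spatial operator, not this core. A minimal Euler integration (with assumed illustrative rates, not the paper's parameters):

```python
beta, gamma = 0.3, 0.1        # assumed transmission and recovery rates (R0 = 3)
s, i, r = 0.99, 0.01, 0.0     # initial susceptible / infected / recovered fractions
dt = 0.1

for _ in range(int(200 / dt)):
    ds = -beta * s * i            # new infections leave S
    di = beta * s * i - gamma * i # ... and enter I, which recovers at rate gamma
    s, i, r = s + dt * ds, s * 0 + i + dt * di, r + dt * (gamma * i)

print(round(s + i + r, 6))  # -> 1.0 (total population mass is conserved)
```

In the paper, the spatial coupling between communities is replaced by a fractional derivative, which is what lets the aggregated model reproduce the super-diffusive spread observed in the hierarchical block model.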
Performance and Soundness of Simulation: A Case Study based on a Cellular Automaton for In-Body Spread of HIV
Till Köster (University of Rostock), Philippe J. Giabbanelli (Miami University), and Adelinde M. Uhrmacher (University of Rostock)
Abstract
Due to the large number of cells in such models, efficient simulation algorithms have received increasing attention.
In this paper, we focus on improving the efficiency of a cellular automaton model known as the 'dos Santos' model for the Human Immunodeficiency Virus (HIV).
The synchronous updates in this model are driven by mostly deterministic transitions and a few highly probabilistic transitions.
This design can be exploited to create an efficient simulation algorithm.
We propose such an algorithm, using an advanced random-number generation scheme for the probabilistic transitions that leverages CPU vectorization.
We note several limitations in the model behavior and simulation outputs, which apply not only to the dos Santos design but also to the many synchronous HIV models inheriting its design.
pdf
Modeling Methodology
Multi-Formalism and Multi-Simulation
Chair: Adelinde Uhrmacher (University of Rostock)
Learning Rule-based Explanatory Models from Exploratory Multi-Simulation For Decision-Support Under Uncertainty
Brodderick Rodriguez and Levent Yilmaz (Auburn University)
Abstract
Exploratory modeling and simulation is an effective strategy when there is substantial contextual uncertainty and representational ambiguity in problem formulation. However, two significant challenges impede the use of an ensemble of models in exploratory simulation. The first challenge involves streamlining the maintenance and synthesis of multiple models from plausible features that are identified from and subject to the constraints of the research hypothesis. The second challenge is making sense of the data generated by multi-simulation over a model ensemble. To address both challenges, we introduce a computational framework that integrates feature-driven variability management with an anticipatory learning classifier system to generate explanatory rules from multi-simulation data.
pdf
Autonomous and Composable M&S System of Systems with the Simulation, Experimentation, Analysis and Testing (SEAT) Framework
Saurabh Mittal, Nick Kasdaglis, Lamar Harrell, Robert Wittman, John Gibson, and David Rocca (MITRE Corporation)
Abstract
A simulation System of Systems (SoS) comprised of various simulation systems may have dissimilar modeling paradigms. For performing autonomy research and development, various types of simulation systems need to be brought together to build a multi-domain virtual SoS wherein the effects of autonomous entities/agents can be observed. While the distributed simulation community has solved the integrability challenge using standards like Distributed Interactive Simulation (DIS) and High Level Architecture (HLA), the model composability challenge is an open research problem. Many software systems can now be made available as Docker applications that can be readily plugged into existing simulation systems. However, trust issues with such integration limit their usage, as established systems cannot trust third-party apps. This paper highlights some of the challenges of building a cloud-based simulation SoS, proposes an architecture framework using the concept of structural autonomy, and leverages Modeling & Simulation as a fundamental enabler for autonomy research.
pdf
Modeling the Water-energy Nexus for the Phoenix Active Management Area
Mostafa Fard and Hessam Sarjoughian (Arizona State University); Imran Mahmood (Nat. Univ. of Sciences & Technology); and Adil Mounir, Xin Guan, and Giuseppe Mascaro (Arizona State University)
Abstract
Phoenix, an Active Management Area in the desert Southwest, is the fifth most populous city in the US. Scarce local groundwater and water transported from external resources must be managed in the presence of different types of energy sources. Local and regional decision-makers are faced with answering challenging questions on managing combined water and energy supply and demand for the short and long term. Prediction and planning for the interdependency of these entities can benefit from modeling the water and energy systems as well as their interactions with one another. In this paper, the integrated WEAP and LEAP tools and a hybrid modeling framework that externalizes their hidden linkage to an interaction model are described and compared using the Phoenix Active Management Area. Loose coupling enabled by interaction modeling is key for decision policies that can govern the nexus of the water-energy system of systems.
pdf
Modeling Methodology
Model Specification Methodologies
Chair: Hessam Sarjoughian (Arizona State University)
Simulus: Easy Breezy Simulation in Python
Jason Liu (Florida International University)
Abstract
This paper introduces Simulus, a full-fledged open-source discrete-event simulator supporting both event-driven and process-oriented simulation world-views. Simulus is implemented in Python and aspires to be part of the Python ecosystem for scientific computing. It provides several advanced modeling constructs to ease common simulation tasks (e.g., complex queuing models, inter-process synchronizations, and message-passing communications). Simulus also provides organic support for simultaneously running a time-synchronized group of simulators, either sequentially or in parallel, thereby allowing composable simulation with individual simulators handling different aspects of a target system, and enabling large-scale simulation on parallel computers. This paper describes the salient features of Simulus and examines its major design decisions.
pdf
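The event-driven world-view the abstract refers to can be sketched in a few lines of plain Python (this is not Simulus's API, just the underlying idea): a priority queue of timestamped events, processed in timestamp order.

```python
import heapq

events = []   # pending (time, tiebreak, handler) tuples
now = 0.0

def schedule(t, fn):
    # id(fn) breaks ties so heapq never compares the handlers themselves.
    heapq.heappush(events, (t, id(fn), fn))

log = []
schedule(2.0, lambda: log.append("depart"))
schedule(1.0, lambda: log.append("arrive"))

# The event loop: always advance the clock to the earliest pending event.
while events:
    now, _, fn = heapq.heappop(events)
    fn()

print(log)  # -> ['arrive', 'depart']
```

Process-oriented simulation, the other world-view Simulus supports, layers suspendable processes on top of exactly this kind of loop.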
Online Risk Measure Estimation Via Natural Gradient Boosting
Xiaoting Cai (Shanghai University, School of Management); Yang Yang (The Chinese University of Hong Kong, Shenzhen; School of Data Science); and Guangxin Jiang (Harbin Institute of Technology, School of Management)
Abstract
Estimating risk measures of a portfolio with financial derivatives in real time is an important yet challenging task. Since simulation experiments take time to run, traditional nested simulation methods may not apply well. In this paper, we propose a natural gradient boosting (NGBoost) approach to estimate the conditional probability density function of the value (or the loss) of the portfolio under the framework of offline simulation, online application, and then to estimate real-time risk measures based on the conditional probability density function. By training only one learning model, the NGBoost approach can obtain various real-time risk measures, which measure risk from different aspects and provide a comprehensive understanding of it. Numerical examples show the effectiveness of the NGBoost approach.
pdf
Conceptual Models in Simulation Studies: Making it Explicit
Pia Wilsdorf, Fiete Haack, and Adelinde M. Uhrmacher (University of Rostock)
Abstract
Conceptual models play an important role in conducting simulation studies. A formal, or at least explicit, specification of conceptual models is key for effectively exploiting them during simulation studies and thereafter, for interpreting and reusing the simulation results. However, the perception of conceptual models varies strongly, and with it the possible means of specification. A broad definition of the conceptual model, i.e., as a loose collection of early-stage products of the simulation study, holds the potential to unify existing definitions, but also poses specific challenges for specification. To approach these challenges, without claiming to be exhaustive, we identify a set of products, which includes the research question, data, and requirements, and define relations and properties of these products. Based on a cell biological case study and a prototypical implementation, we show how the formal structuring of the conceptual model assists in building a simulation model.
pdf
Modeling Methodology
Stochastic and Queueing Systems
Chair: Cristina Ruiz-Martín (Carleton University)
Scheduling Queues with Simultaneous and Heterogeneous Requirements from Multiple Types of Servers
Noa Zychlinski, Carri W. Chan, and Jing Dong (Columbia University)
Abstract
We study the scheduling of a new class of multi-class multi-pool queueing systems where different classes of customers have heterogeneous -- in terms of type and amount -- resource requirements. In particular, a customer may require different numbers of servers from different server pools to be allocated simultaneously in order to be served. We apply stochastic simulation to study properties of the model and identify two types of server idleness: avoidable and unavoidable idleness, which play important, but different, roles in dictating system performance and need to be carefully managed in scheduling. To minimize the long-run average holding cost, we propose a generalization of the $c\mu$-rule, called the Generalized Idle-Aware (GIA) $c\mu$-rule. We provide insights into how to set the hyperparameters of the GIA $c\mu$-rule. We also demonstrate that, with properly chosen hyperparameters, the GIA $c\mu$-rule achieves superior and robust performance compared to reasonable benchmarks.
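For context, the classic $c\mu$-rule that the paper generalizes can be sketched in a few lines: among classes with customers waiting, serve the class with the largest product of holding cost $c$ and service rate $\mu$. The class names and rates below are hypothetical, and this sketch omits the idleness-aware hyperparameters that distinguish the GIA variant:

```python
def c_mu_priority(waiting, holding_cost, service_rate):
    """Classic c-mu rule: among classes with customers waiting, pick the class
    maximizing holding cost c times service rate mu. Returns None if idle."""
    candidates = [k for k, n in waiting.items() if n > 0]
    if not candidates:
        return None
    return max(candidates, key=lambda k: holding_cost[k] * service_rate[k])

# Class B wins: 3.0 * 0.5 = 1.5 > 2.0 * 0.6 = 1.2
chosen = c_mu_priority({"A": 4, "B": 2}, {"A": 2.0, "B": 3.0}, {"A": 0.6, "B": 0.5})
print(chosen)  # B
```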
pdf
Integrated Performance Evaluation of Extended Queueing Network Models with LINE
Giuliano Casale (Imperial College London)
Abstract
Despite the large literature on queueing theory and its applications, tool support to analyze these models is mostly focused on discrete-event simulation and mean-value analysis (MVA). This circumstance diminishes the applicability of other types of advanced queueing analysis methods to practical engineering problems, for example analytical methods to extract probability measures useful in learning and inference. In this tool paper, we present LINE 2.0, an integrated software package to specify and analyze extended queueing network models. This new version of the tool is underpinned by an object-oriented language to declare a fairly broad class of extended queueing networks. These abstractions have been used to integrate in a coherent setting over 40 different simulation-based and analytical solution methods, facilitating their use in applications.
pdf
Approximating the Levy-Frailty Marshall-Olkin Model for Failure Times
Javiera Barrera and Guido Lagos (Universidad Adolfo Ibañez)
Abstract
In this paper we approximate the last, close-to-first, and what we call quantile failure times of a system, when the system-components' failure times are modeled according to a Levy-frailty Marshall-Olkin (LFMO) distribution. The LFMO distribution is a fairly recent model that can be used to model components failing simultaneously in groups. One of its prominent features is that the failure times of the components are conditionally iid; indeed, the failure times are iid exponential when conditioned on the path of a given Levy subordinator process. We are motivated by further studying the order statistics of the LFMO distribution, as recently Barrera and Lagos (2020) showed an atypical behavior for the upper-order statistics. We are also motivated by approximating the system when it has an astronomically large number of components. We perform computational experiments that show significant variations in the convergence speeds of our approximations.
pdf
Project Management and Construction
Project Management and Construction
Robots on Construction Sites
Chair: Jing Du (University of Florida)
An Immersive Virtual Learning Environment for Worker-Robot Collaboration on Construction Sites
Pooya Adami, Tenzin Doleck, Burcin Becerik-Gerber, Lucio Soibelman, Yasemin Copur-Gencturk, and Gale Lucas (University of Southern California)
Abstract
This paper presents a Virtual Learning Environment (VLE) that simulates the collaboration between a construction robot and a construction worker in order to support training in worker-robot teamwork on construction sites. The VLE presented in the paper utilizes a semi-autonomous remote-controlled demolition robot named Brokk. This paper describes the use-case robot (Brokk), system setup and configuration, and the learning scenarios. In this VLE, the trainee learns about the robot’s various components, safety management, functions of the robot's control box, and general guidelines used during common demolition tasks. Importantly, this VLE is developed based on adult learning theories in general—and andragogy principles in particular. In addition, we augment the VLE with useful features from existing VLEs. The effectiveness of the developed VLE will be evaluated with construction workers based on knowledge and performance assessments.
pdf
Toward Intelligent Workplace: Prediction-Enabled Proactive Planning for Human-Robot Coexistence on Unstructured Construction Sites
Da Hu (The University of Tennessee, Knoxville); Jiannan Cai (The University of Texas at San Antonio); Yuqing Hu (Pennsylvania State University); and Shuai Li (The University of Tennessee, Knoxville)
Abstract
Construction robot path planning is critical for safe and effective human-robot collaboration in future intelligent workplaces. While many studies have developed methods to generate paths for construction robots, very few, if any, have integrated worker trajectory prediction on the jobsite. The objective of this research is to find a safe and efficient robot path while taking into account the predicted movement of construction workers. To this end, we propose a context-aware Long Short-Term Memory (LSTM)-based method to predict a worker’s trajectory. Based on the predicted trajectory, the A* and Dynamic Window Approach (DWA) algorithms are used to find an optimal path for the robot. The efficiency and effectiveness of the proposed method are demonstrated by simulated and field experiments. The proposed method will contribute to the body of knowledge on prediction-based construction robot path planning and has the potential to be integrated into existing robot platforms to enhance their performance.
pdf
Neural Functional Analysis in Virtual Reality Simulation: Example of a Human-Robot Collaboration Task
Qi Zhu and Jing Du (University of Florida)
Abstract
Human-robot collaboration (HRC) has gained popularity with the fast evolution of Industry 4.0. One of the challenges of HRC is designing human-robot interfaces that adapt to personalized needs. This paper presents a method of using Virtual Reality (VR) simulation as a testbed and data collector for examining and modeling personal reactions to different human-robot interface designs. To obtain real-time leading indicators of human performance, this study focuses on neural functional analysis in VR. An integrated system is presented that uses eye-tracking and force input data as event markers for a neuroimaging technique, Functional Near Infrared Spectroscopy (fNIRS). The real-time hemodynamic responses in subjects’ brains are analyzed with the general linear model (GLM) to model neural functional changes under different levels of haptic design. Our results indicate that the neurobehavioral data collected from the VR environment can be used directly as a personalized model for human-robot interface optimization.
pdf
Project Management and Construction
Sensors for Simulation
Chair: Amir Behzadan (Texas A&M University)
Comparison of Different Beamforming-based Approaches for Sound Source Separation of Multiple Heavy Equipment at Construction Job Sites
Behnam Sherafat and Abbas Rashidi (University of Utah) and Sadegh Asgari (Merrimack College)
Abstract
Construction equipment performance monitoring can support detecting equipment idle time, estimating equipment productivity rates, and evaluating the cycle time of activities. Each machine generates unique sound patterns that can be used for equipment activity detection. In the last decade, several audio-based methods have been introduced to automate the process of equipment activity recognition. Most of these methods only consider single-equipment scenarios, whereas real construction job sites consist of multiple machines working simultaneously. Thus, there is an increasing demand for advanced techniques to separate different equipment sound sources and evaluate each machine’s productivity separately. In this study, six beamforming-based approaches for construction equipment sound source separation are implemented and evaluated using real construction job site data. The results show that the Frost beamformer and time-delay Linear Constraint Minimum Variance (LCMV) beamformer generate outputs with array gains of more than 4.0, making them more reliable than the other four beamforming techniques for equipment sound separation.
pdf
Deep Generative Adversarial Network to Enhance Image Quality for Fast Object Detection in Construction Sites
Nipun Nath and Amir H. Behzadan (Texas A&M University)
Abstract
Visual recognition of the content and actions that take place in a construction site is important in many applications such as data-driven simulation, autonomous systems, and intelligent machinery. Construction projects, however, are dynamic and complex, and often take place in harsh environments. This may hinder the ability to collect good quality, well-lit, and occlusion-free imagery, which in turn can lower the performance of computer vision models for fast and reliable object detection. In this paper, we propose and validate a deep convolutional neural network (CNN)-based generative adversarial network (GAN) trained and tested on construction site photos from two in-house datasets to increase image resolution by generating missing pixel information. Results show that using GAN-enhanced images can improve the average precision of pre-trained models for detecting objects such as building, equipment, worker, hard hat, and safety vest by up to 32% while maintaining the overall processing time for real-time object detection.
pdf
A Thermal-Based Technology for Roller Path Tracking and Mapping in Pavement Compaction Operations
Linjun Lu and Fei Dai (West Virginia University)
Abstract
Compaction is one of the most important phases in the construction of asphalt concrete (AC) pavements, as it directly affects the density, and thereby the performance, of pavements. This paper proposes a thermal-based compaction technology for real-time roller path tracking and mapping in pavement compaction operations, with which roller operators can better control compaction quality. In the proposed method, the incremental change of the roller position over a short interval is decomposed into two motion components (i.e., the change in heading direction and the linear translation). The global position of the roller is then recovered by chaining the frame-by-frame motion changes. Two sets of experimental data from different pavement construction sites were used to test the performance of the proposed technology. The results showed that the developed technology is a promising alternative to current GPS-based intelligent compaction (IC) in pavement compaction operations.
pdf
Project Management and Construction
Project Management
Chair: Ming Lu (University of Alberta)
A Tale of Two Simulations for Project Managers
Sanjay Jain (The George Washington University)
Abstract
Project managers need to understand the impact of uncertainties on project plans. Two techniques that can meet this need are Monte Carlo simulation (MCS) and discrete event simulation (DES). MCS uses random samples of input parameters to determine the system outputs. DES also uses random samples but models the sequence of events in a system, with time modeled explicitly. Both use the results of repeated executions to determine the distribution of outputs. There has been increasing use of MCS for evaluating the impact of uncertainties in activity durations and costs on a project’s duration and total cost; there are few reports of DES being used for this purpose. This paper presents analyses of a hypothetical project using both simulation approaches. The results show that discrete event simulation has an advantage within the scope of this study, given the features and limitations of the software used.
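The MCS approach described above can be sketched in a few lines: sample activity durations, propagate them through the precedence network, and repeat. The four-activity network and triangular duration parameters below are purely illustrative and not taken from the paper:

```python
import random

# Hypothetical 4-activity network: A -> B, A -> C, (B, C) -> D.
# Durations are triangular(low, mode, high); all numbers are illustrative.
ACTIVITIES = {"A": (2, 4, 7), "B": (3, 5, 9), "C": (4, 6, 8), "D": (1, 2, 4)}

def one_replication(rng):
    d = {k: rng.triangular(lo, hi, mode) for k, (lo, mode, hi) in ACTIVITIES.items()}
    finish_a = d["A"]
    finish_b = finish_a + d["B"]
    finish_c = finish_a + d["C"]
    return max(finish_b, finish_c) + d["D"]   # project duration

rng = random.Random(42)
durations = [one_replication(rng) for _ in range(10_000)]
mean = sum(durations) / len(durations)
p90 = sorted(durations)[int(0.9 * len(durations))]
```

A DES treatment of the same project would instead advance a simulation clock through start/finish events, which is what allows it to capture resource contention that plain MCS misses.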
pdf
A Specification for Effective Simulation Project Management
Allen G. Greenwood (FlexSim Software Products, Inc.)
Abstract
This paper proposes a specification for effectively defining and managing simulation projects. The specification provides a structured and ordered series of tasks that need to be completed in order for a simulation endeavor to be successful; i.e., it sets forth a set of requirements to guide a simulation project through the Project Management Institute’s five phases: initiation, planning, execution, monitoring and controlling, and closing. The specification is applicable to any type of simulation, in any domain, for any purpose, and for any scale.
pdf
On the Use of Simulation-Optimization in Sustainability-Aware Project Portfolio Management
Miguel Saiz (Universitat Oberta de Catalunya, EAE Business School); Marisa Andrea Lostumbo (Universitat Oberta de Catalunya); Angel Alejandro Juan (Universitat Oberta de Catalunya, Euncet Business School); and David Lopez-Lopez (ESADE business school, EAE Business School)
Abstract
Among other variables, uncertainty and limited resources make real-life project portfolio management a complex activity. Simulation-optimization is considered an appropriate technique for facing stochastic problems like this one. The main objective of this paper is to develop a hybrid model, which combines optimization with Monte Carlo simulation, to deal with stochastic project portfolio management. A series of computational experiments illustrate how this hybrid approach can include uncertainty in the model, and how this is an essential contribution to informed decision making. A relevant novelty is the inclusion of a sustainability dimension, which allows managers to select and prioritize projects not only based on their monetary profitability but also taking into account the associated environmental and/or social impact. This additional criterion can be necessary when evaluating projects in areas such as civil engineering, building and construction, or urban transformation.
pdf
Project Management and Construction
Information and Planning
Chair: Wenying Ji (George Mason University)
Planning and Scheduling Drainage Infrastructure Maintenance Operations Under Hard and Soft Constraints: A Simulation Study
Monjurul Hasan, Ming Lu, and Simaan AbouRizk (University of Alberta) and Jason Neufeld (EPCOR Utilities Inc.)
Abstract
The problem of planning drainage services in a certain timeframe by a typical municipal infrastructure maintenance organization lends itself well to existing solutions for resource-constrained project scheduling optimization. However, the optimized schedule could be deemed insufficient from a practitioner’s perspective, as the very crucial soft constraint on the competence of a particular crew handling different jobs is excluded. We define a crew-job matching index ranging from 0 to 1 to allow a planner's assessment to be factored into job schedule simulation and optimization. The total crew-job matching index (TCJMI) is further defined, accounting for all the crew-job assignments and indicating the fitness of a formulated plan. TCJMI is maximized in the resource-constrained schedule optimization by applying the Excel Solver add-in. As such, the planner’s preference and experience can be represented and factored into crew-job scheduling optimization, which is demonstrated through conducting “what-if” simulation scenario analyses.
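The TCJMI maximization can be illustrated with a tiny dependency-free sketch: enumerate one-to-one crew-job assignments and keep the one with the highest total matching index. This stands in for the paper's Excel Solver formulation; the crews, jobs, and index values below are hypothetical:

```python
from itertools import permutations

# Hypothetical crew-job matching indices in [0, 1]; keys are (crew, job) pairs.
CJMI = {
    ("crew1", "job1"): 0.9, ("crew1", "job2"): 0.4, ("crew1", "job3"): 0.3,
    ("crew2", "job1"): 0.6, ("crew2", "job2"): 0.8, ("crew2", "job3"): 0.5,
    ("crew3", "job1"): 0.2, ("crew3", "job2"): 0.7, ("crew3", "job3"): 0.9,
}
crews = ["crew1", "crew2", "crew3"]
jobs = ["job1", "job2", "job3"]

def best_assignment():
    """Enumerate one-to-one crew-job assignments and maximize the TCJMI."""
    best, best_total = None, -1.0
    for perm in permutations(jobs):
        total = sum(CJMI[(c, j)] for c, j in zip(crews, perm))
        if total > best_total:
            best, best_total = dict(zip(crews, perm)), total
    return best, best_total

assignment, tcjmi = best_assignment()
```

Brute-force enumeration only works for small instances; a solver (as in the paper) or the Hungarian algorithm scales to realistic crew counts.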
pdf
Automated Abstraction of Operation Processes from Unstructured Text for Simulation Modeling
Yitong Li and Wenying Ji (George Mason University) and Simaan M. AbouRizk (University of Alberta)
Abstract
Abstraction of operation processes is a fundamental step for simulation modeling. To reliably abstract an operation process, modelers rely on text information to study and understand details of operations. Aiming at reducing modelers’ interpretation load and ensuring the reliability of the abstracted information, this research proposes a systematic methodology to automate the abstraction of operation processes. The methodology applies rule-based information extraction to automatically extract operation process-related information from unstructured text and creates graphical representations of operation processes using the extracted information. To demonstrate the applicability and feasibility of the proposed methodology, a text description of an earthmoving operation is used to create its corresponding graphical representation. Overall, this research enhances the state-of-the-art simulation modeling through achieving automated abstraction of operation processes, which largely reduces modelers’ interpretation load and ensures the reliability of the abstracted operation processes.
pdf
Understanding the Dynamics of Information Flow During Disaster Response Using Absorbing Markov Chains
Yitong Li and Wenying Ji (George Mason University)
Abstract
This paper aims to derive a quantitative model to evaluate the impact of information flow on the effectiveness of disaster response. At the core of the model is a specialized absorbing Markov chain that models the process of delivering federal assistance to the community while considering stakeholder interactions and information flow uncertainty. Using the proposed model, the probability of community satisfaction is computed to reflect the effectiveness of disaster response. A hypothetical example is provided to demonstrate the applicability and interpretability of the derived quantitative model. Practically, the research provides governmental stakeholders with interpretable insights for evaluating the impact of information flow on their disaster response effectiveness, so that proactive actions can be targeted at critical stakeholders for enhanced disaster response.
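The core computation for any absorbing Markov chain is standard: with transient-to-transient transitions Q and transient-to-absorbing transitions R, the absorption probabilities are B = (I - Q)^-1 R. The two-state chain below (states and numbers are illustrative, not from the paper) shows the mechanics with a hand-written 2x2 inverse:

```python
def absorption_probabilities(Q, R):
    """B = (I - Q)^-1 R for a 2-transient-state absorbing Markov chain
    (2x2 matrix inverse written out by hand to stay dependency-free)."""
    a, b = 1 - Q[0][0], -Q[0][1]
    c, d = -Q[1][0], 1 - Q[1][1]
    det = a * d - b * c
    N = [[d / det, -b / det], [-c / det, a / det]]   # fundamental matrix (I - Q)^-1
    return [[sum(N[i][k] * R[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Hypothetical states: transient = (request submitted, under review),
# absorbing = (community satisfied, community unsatisfied); rows sum to 1
# together with the corresponding R row.
Q = [[0.0, 0.8], [0.1, 0.3]]
R = [[0.1, 0.1], [0.5, 0.1]]
B = absorption_probabilities(Q, R)
p_satisfied = B[0][0]   # probability of eventual satisfaction from "submitted"
```

Each row of B sums to one, since every trajectory is eventually absorbed; p_satisfied is the model's "probability of community satisfaction" analogue.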
pdf
Project Management and Construction
Integrating Simulation with Other Methods I
Chair: Joseph Louis (Oregon State University)
Predicting Terminal Mid-Air Collisions Through Simulation Experiments of Air Traffic Control
Yanyu Wang, Pingbo Tang, and Ying Shi (Carnegie Mellon University); Yongming Liu (Arizona State University); and Nancy J. Cooke (Arizona State University, Human Systems Engineering)
Abstract
The workload of air traffic controllers (ATCs) is increasing due to growing air traffic. Early alarms of loss of separation (LoS) events between aircraft are critical for ATCs to coordinate intensive traffic safely. The authors studied the time series of traffic densities and numbers of turning aircraft in a given sky section as early indicators of pending LoS. A simulator experiment produced data for comparing the prediction accuracies of logistic regression models generated from the time series of traffic densities, the numbers of turning aircraft, and combinations of the two. We studied different sections of the time series to examine the possibility of early detection and found that 1) the regression model based on the traffic density time series is more accurate than the model using the numbers of turning aircraft; and 2) properly combining sections of the time series could produce models that achieve earlier predictions without losing accuracy.
pdf
A Green Performance Bond Framework for Managing Greenhouse Gas Emissions During Construction: Proof of Concept Using Agent-Based Modeling
Sadegh Asgari (Merrimack College, Columbia University)
Abstract
A+C and A+B+C bidding methods have been recognized as innovative green contracting strategies for addressing climate change by reducing greenhouse gas emissions during the construction phase of infrastructure projects. However, there are practical issues, including the possibility of opportunistic bidding, that cast doubt on the successful implementation of these bidding methods. This study introduces a green performance bond framework as a potential solution and evaluates its feasibility and effectiveness in discouraging opportunistic bidding behaviors. In doing so, the A+C bidding environment is simulated using agent-based modeling, and simulation experiments are conducted in which contractors attempt to increase their probability of winning by intentionally submitting an unrealistic emission mitigation plan. The results show that applying the green performance bond framework can significantly reduce over-emission and the probability of success of an opportunistic bid in all bidding scenarios.
pdf
Project Management and Construction
Integrating Simulation with Other Methods II
Evaluation and Selection of Hospital Layout Based on an Integrated Simulation Method
Yongkui Li, Yan Zhang, and Lingyan Cao (Tongji University)
Abstract
Space planning and management, as an important part of facility management in the hospital context, is closely related to patient flow and patient behavior. A lack of consideration of such interdependencies would complicate movement patterns of end-users during the consultation and treatment process and, as a result, reduce hospital operational efficiency. To facilitate better space planning and management in hospitals, this study integrates discrete-event simulation and agent-based simulation to examine and evaluate different layout designs. The constructed simulation models take into account patient flow and patient behavior. At the same time, on-site surveying and monitoring data as well as realistic medical information are used as inputs for the simulation model. Simulation results, including patient lead time and facility utilization, were used for hospital layout selection. This research provides a new approach to layout design selection and contributes to more effective and efficient space planning and management in healthcare facilities.
pdf
Optimizing Labor Allocation in a Modular Construction Factory Using Discrete Event Simulation and Genetic Algorithm
Khandakar M. Rashid (Oregon State University; Civil Engineering Student Association, Bangladesh University of Engineering and Technology); Joseph Louis (Oregon State University); and Colby Swanson (Momentum Innovation Group)
Abstract
Modular construction is gaining popularity in the USA due to several advantages over stick-built methods in terms of reduced waste and time. Since modular construction factories operate as an assembly line, the number of workers at various workstations dictates the efficiency of the overall production. This paper presents a resource allocation framework combining a discrete event simulation (DES) model and a genetic algorithm (GA) to facilitate data-driven decision making. The DES model simulates the process of constructing modular units in the factory, and the GA optimizes the number of workers at different workstations to minimize makespan. A case study with a real-world modular construction factory showed that optimizing the assignment of available workers can reduce the makespan by up to 15%. This study demonstrates the potential of the proposed method as a practical tool to optimize resource allocation in the uncertain work environments of modular construction factories.
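The GA-over-allocations idea can be sketched compactly. Here the fitness function is a crude stand-in for the paper's DES model: the cycle time of a serial line is set by its bottleneck station, with station time shrinking proportionally to assigned workers. Station count, worker count, and work contents are all illustrative assumptions:

```python
import random

STATIONS = 4                      # workstations in the hypothetical assembly line
WORKERS = 10                      # total workers to allocate
WORK = [12.0, 20.0, 8.0, 16.0]    # work content per module at each station (illustrative)

def makespan(alloc):
    # Bottleneck cycle time: station time shrinks with the number of workers.
    return max(w / n for w, n in zip(WORK, alloc))

def mutate(alloc, rng):
    # Move one worker between stations, keeping every station staffed.
    a = alloc[:]
    src = rng.choice([i for i in range(STATIONS) if a[i] > 1])
    dst = rng.choice([i for i in range(STATIONS) if i != src])
    a[src] -= 1; a[dst] += 1
    return a

def genetic_search(pop_size=20, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [[1] * STATIONS for _ in range(pop_size)]
    for ind in pop:                      # randomly place the remaining workers
        for _ in range(WORKERS - STATIONS):
            ind[rng.randrange(STATIONS)] += 1
    for _ in range(generations):
        pop.sort(key=makespan)
        survivors = pop[: pop_size // 2]               # truncation selection
        pop = survivors + [mutate(rng.choice(survivors), rng) for _ in survivors]
    return min(pop, key=makespan)

best = genetic_search()
```

In the paper's framework, `makespan` would instead be a full DES run of the factory, which is what makes the GA's fitness evaluations expensive and the optimization non-trivial.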
pdf
Simulation and the Humanities
Track Coordinator - Simulation and the Humanities: F. LeRon Shults (University of Agder), Andreas Tolk (MITRE Corporation)
Simulation and the Humanities
Simulation and the Humanities
Chair: Andreas Tolk (MITRE Corporation)
Simulating Epidemics: Why the Models Failed Us
Eric Winsberg (University of South Florida)
Abstract
Since the beginning of the pandemic, models have played a larger role in guiding human affairs than perhaps ever in history. Simple models have been used to predict the “herd immunity threshold” for Covid-19. More complex models have been used to predict the natural course of the disease and project the impact of various candidate interventions. Causal modeling has been used to infer the (counterfactual) effects of past interventions. Some of the decisions that have been guided by these models have been disastrous. The brazen character of some of the inferences that have been drawn and widely publicized will likely diminish the future credibility of science in an increasingly politically fractured world. Why has this happened? How can we do better in the future?
pdf
Best Friends Forever? Modelling the Mechanisms of Friendship Network Formation
Ivan Puga-Gonzalez (University of Agder), Kevin McCaffree (University of North Texas), and Fount LeRon Shults (NORCE)
Abstract
The formation of friendships and alliances is a ubiquitous feature of human life, and likely a crucial component of the cooperative hunting and child-rearing practices that helped our early hominin ancestors survive. Research on contemporary human beings typically finds that strong-tie social networks are fairly small, and reveals a high degree of physical (e.g., age) and social-structural (e.g., educational attainment) homophily. Yet, existing work all too often underestimates, or even ignores, the importance of abstract, symbolic homophily (such as shared identities or worldviews) as a driver of friendship formation. Here we employ agent-based modeling to identify the optimal variable weights influencing friendship formation in order to best replicate the results of existing empirical work. We include indicators of physical and social-structural homophily, in addition to symbolic homophily. Results suggest that the optimization values that best replicate existing empirical work include strong variable weightings of kinship, shared worldview, and outgroup suspicion.
pdf
Simulation Applications In Humanitarian Logistics: A Systematic Literature Review
Camila Laura Pareja Yale, Márcia Lorena da Silva Frazão, Marco Aurélio de Mesquita, and Hugo Tsugunobu Yoshida Yoshizaki (University of São Paulo)
Abstract
Humanitarian logistics involves planning, acting, and controlling in situations in which vulnerable people are involved. Simulation techniques can successfully represent the dynamics of disasters, characterized by complexity and uncertainty. This work reviews papers that apply simulation techniques to solve problems that arise in disaster situations. To accomplish this, we analysed 33 papers published from 2010 to 2019, indexed in the Scopus or Web of Science databases, that used simulation techniques to solve humanitarian problems. The descriptive analysis found that there is a small but increasing number of papers over time. Discrete-event simulation and system dynamics are the most used techniques. From the content analysis, we find that most research focuses on short-term decisions and on solving problems related to post-disaster response in unforeseen events. However, topics such as evacuation and transportation of victims are still unexplored.
pdf
Simulation for Global Challenges
Track Coordinator - Simulation for Global Challenges: Dave Goldsman (Georgia Institute of Technology), Stewart Robinson (Loughborough University)
Simulation for Global Challenges
Simulation for Global Challenges
Chair: Stewart Robinson (Loughborough University)
Modeling And Simulation For Decision Support In Precision Livestock Farming
Parisa Niloofar (University of Southern Denmark), Deena P. Francis (Karunya Institute of Technology and Sciences), Sanja Lazarova-Molnar (University of Southern Denmark), Alexandru Vulpe (University Politechnica of Bucharest), and George Suciu and Mihaela Balanescu (BEIA Consult International)
Abstract
Precision Livestock Farming (PLF) is a system that allows real-time monitoring of animals, which brings many benefits, ensures maximum use of farm resources, and helps control the health status of animals. Decision support systems in the livestock sector help farmers take actions in support of animal health and better product yield. Due to the complexity of decision-making processes, modeling and simulation tools are being extensively used to support farmers and decision makers in livestock industries. Modeling and simulation approaches minimize the risk of making wrong decisions and help to assess the impact of different strategies before applying them in reality. In this paper, we highlight the role of modeling and simulation in enhancing decision-making processes in precision livestock farming, and provide a comprehensive overview and categorization with respect to the relevant goals and simulation paradigms. We further discuss the associated optimization approaches and data collection challenges.
pdf
Modelling Migration: Decisions, Processes and Outcomes
Jakub Bijak, Philip A. Higham, Jason Hilton, Martin Hinsch, Sarah Nurse, and Toby Prike (University of Southampton); Oliver Reinhardt (University of Rostock); Peter WF Smith (University of Southampton); and Adelinde M. Uhrmacher (University of Rostock)
Abstract
Human migration is uncertain and complex, and some of its distinct features, such as migration routes, can emerge and change very rapidly. The agency of various actors is one key reason why migration eludes attempts at its theoretical description, explanation, and prediction. To address the complexity challenges through simulation models, which would coherently link micro-level decisions with macro-level processes, a coherent model design and construction process is needed. Here, we present such a process alongside its five building blocks: an agent-based simulation of migration route formation, resembling the recent asylum migration to Europe; an evaluation framework for migration data; psychological experiments eliciting decisions under uncertainty; the choice of a programming language and modelling formalisms; and statistical analysis with Bayesian meta-modelling based on Gaussian Process assumptions and experimental design principles. This process allows us to identify knowledge advancements that can be achieved through modelling, and to elucidate the remaining knowledge gaps.
pdf
Inventory Management with Disruption Risk
Canan Gunes Corlu (Boston University); Bahar Biller (SAS Institute, Inc); Elliot Wolf (The Chemours Company); and Enver Yucesan (INSEAD)
Abstract
Initiatives like lean manufacturing, pooling, and postponement have been effective in mitigating the cost-service trade-off by maintaining high levels of service while reducing system inventories. However, such initiatives exacerbate supply chain disruptions during a catastrophic event, thereby creating a new trade-off between robustness during disruptions and efficiency during normal operations. We evaluate stocking decisions in the presence of operational disruptions, which represent different risks from those associated with demand uncertainties, as they stop production flow and typically persist longer. Operational disruptions can therefore be much more devastating, even though their likelihood of occurrence may be low. Using stochastic simulation, we combine the newsvendor and order-up-to models capturing demand uncertainty costs with catastrophe models capturing not only the cost of supply disruption but also the cost of recovery, to obtain insights for managing inventory under disruption risk.
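The interaction between routine demand risk and rare, persistent supply disruptions described above is easy to reproduce in a toy Monte Carlo model. The sketch below is a minimal illustration of that trade-off, not the authors' model: all parameters (normal demand, Bernoulli disruption onset, fixed outage length, cost rates) are hypothetical placeholders.

```python
import random

def simulate_cost(base_stock, n_periods=20000, seed=1,
                  mean=100.0, sd=20.0, holding=1.0, shortage=9.0,
                  p_disrupt=0.02, outage_len=5):
    """Average per-period cost of an order-up-to policy with random
    supply outages. All parameter values are illustrative only."""
    rng = random.Random(seed)
    inv = float(base_stock)
    outage = 0          # remaining periods of the current disruption
    cost = 0.0
    for _ in range(n_periods):
        inv -= max(0.0, rng.normalvariate(mean, sd))   # demand; backorders allowed
        cost += holding * max(inv, 0.0) + shortage * max(-inv, 0.0)
        if outage == 0 and rng.random() < p_disrupt:
            outage = outage_len                         # a disruption begins
        if outage > 0:
            outage -= 1                                 # no replenishment arrives
        else:
            inv = float(base_stock)                     # order up to base stock
    return cost / n_periods
```

Sweeping `base_stock` over a grid with and without disruptions (set `p_disrupt=0`) typically shows the cost-minimizing base stock rising once disruptions are switched on, which is the robustness-versus-efficiency trade-off the abstract describes.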
pdf
Simulation in Industry 4.0
Track Coordinator - Simulation in Industry 4.0: Young Jae JANG (Daim Research, Korea Advanced Institute of Science and Technology), Andrea Matta (Politecnico di Milano)
Simulation in Industry 4.0
Digital Twin Applications in Production I
Chair: Young Jae JANG (Korea Advanced Institute of Science and Technology, Daim Research)
A Digital Twin Framework for Real-time Analysis and Feedback of Repetitive Work in the Manual Material Handling Industry
Abhimanyu Sharotry, Jesus A. Jimenez, David Wierschem, Francis A. Mendez Mediavilla, Rachel M. Koldenhoven, Damian Valles, George Koutitas, and Semih Aslan (Texas State University)
Abstract Abstract
This research presents a digital twin concept and prototype to represent human operators in the material handling industry. To construct the digital twin, we use a simulation-based framework for data collection and analysis. The framework consists of three modules: the Data Collection Module, the Operator Analysis and Feedback Module, and the Digital Twin Module. A motion capture system assists in the development of the digital twin by capturing simulated material handling activities, similar to those which take place in an actual environment. This paper outlines the processes involved in the development of the digital twin and summarizes the results of pilot experiments to analyze the operator’s fatigue as the operator completes repetitive motions associated with lifting tasks. Fatigue, in this study, is a function of change in joint angles. The digital-twin-based tool provides feedback to the operator in real time to enable correction of those factors which potentially cause injuries to the operator.
pdf
Robot Collaboration Intelligence with AI
Illhoe Hwang (KAIST); Young Jae Jang (KAIST, Daim Research); Seol Hwang, Sangpyo Hong, and Hyunsuk Baek (KAIST); and Kyuhun Hahn (Daim Research)
Abstract Abstract
We present a current automation trend, robot collaboration intelligence, to control and manage individual industrial robots to collaborate intelligently with advanced AI technologies. To increase the level of flexibility in manufacturing lines and warehouse/distribution centers, flexible agent-type robots such as automated guided vehicles have been adopted in many industries. As information technologies advance, these individual agent robots become smart and the fleet size of agents becomes larger. Robot collaboration intelligence is a newly emerging technology that allows intelligent robots to work in a more effective and efficient way. We introduce this emerging technology with industry cases and provide researchers with new research directions in automation and simulation with AI.
pdf
A Case Study of Digital Twin for a Manufacturing Process Involving Human Interactions
Hasan Latif (North Carolina State University), Guodong Shao (NIST), and Binil Starly (North Carolina State University)
Abstract Abstract
Current algorithms, computations, and solutions that predict how humans will engage in smart manufacturing are insufficient for real-time activities. In this paper, a digital-twin implementation of a manual manufacturing process is presented. This work (1) combines simulation with data from the physical world and (2) uses reinforcement learning to improve decision making on the shop floor. An adaptive, simulation-based digital twin is developed for a real manufacturing case. The digital twin demonstrates the improvement in predicting overall production output and solutions to existing problems.
pdf
Simulation in Industry 4.0
Digital Twin Applications in Production II
Chair: Andrea Matta (Politecnico di Milano)
A Generic Workflow Engine for Iterative, Simulation-based Non-Linear System Identifications
Michael Polter, Peter Katranuschkov, and Raimar J. Scherer (Technische Universität Dresden)
Abstract Abstract
A generic process for simulation-based system identification for civil engineering problems is presented. It is iterative and contains a feedback loop for the continuous refinement of the system, taking into account the sensor values of the real object. The process was implemented in a generic workflow engine that can be integrated into software systems and adapted to specific application scenarios. An exemplary implementation in the context of a cyber-physical system for dynamic adaptation of production processes shows the feasibility and the advantages of the developed concept.
pdf
A Design of Digital Twins for Supporting Decision-making in Production Logistics
Yongkuk Jeong, Erik Flores-García, and Magnus Wiktorsson (KTH Royal Institute of Technology)
Abstract Abstract
Recent studies suggest that data-driven decision-making facilitated by Digital Twins (DTs) may be essential for optimizing resources and diversifying value creation in production logistics. However, there is limited understanding of the design of DTs in production logistics. Addressing this issue, this study proposes a process for the design of DTs in production logistics. This study extends related works describing the dimensions of DTs in manufacturing, and adopts a process perspective based on production development literature. The results present a process for the design of DTs including activities in pre-study, conceptual, and detailed design phases corresponding to five DT dimensions. The proposed process is validated during the development of a DT in a production logistics lab in an academic environment. The findings of this study may be essential for avoiding misplaced resources and lost opportunities in the design of DTs in production logistics, and for facilitating planning and resource allocation.
pdf
Simulation in Industry 4.0
Data-Driven Simulation Modeling and Analysis
Chair: Joachim Hunker (Technische Universität Dortmund)
Digital Twins in Simulative Applications: A Taxonomy
Hendrik van der Valk, Joachim Hunker, and Markus Rabe (Technische Universität Dortmund) and Boris Otto (Fraunhofer Institute for Software and Systems Engineering ISST, Technische Universität Dortmund)
Abstract Abstract
With the advances in information technology, the concept of Digital Twins has gained wide attention in both practice and research in recent years. A Digital Twin is a virtual representation of a physical object or system and is connected in a bi-directional way with the physical counterpart. The aim of a Digital Twin is to support all stakeholders during the whole lifecycle of such a system or object. One of the core aspects of a Digital Twin is modeling and simulation, which is a well-established process, e.g., in the development of systems. Simulation models can be distinguished on the basis of different dimensions, e.g., on the basis of their time perspective. The existing literature reviews have paid little to no attention to this simulation aspect of a Digital Twin. In order to address this, the authors have developed a taxonomy based on an extended literature review to bridge the aforementioned gap.
pdf
Generation and Tuning of Discrete Event Simulation Models for Manufacturing Applications
Giovanni Lugaresi and Andrea Matta (Politecnico di Milano)
Abstract Abstract
In order to successfully develop and apply cyber-physical systems in manufacturing, it is fundamental to ensure the availability of up-to-date digital models. Production systems are intricate environments and are subject to frequent changes due to both external and internal factors. Therefore, it is complex to guarantee that a digital model can correctly represent the real system at any time. The literature is rich with methods for the automated generation of discrete event simulation models. However, the generated representations may be excessively accurate and also describe unnecessary activities. The automated development of digital models with an appropriate level of detail can avoid useless efforts and misguided predictions. This paper proposes a method that generates a simulation model and adjusts its level of detail by exploiting the manufacturing system properties. The method has been applied in two test cases and can be used effectively to generate both Petri net and simulation graph models.
pdf
Data-Driven Fault Tree Modeling For Reliability Assessment Of Cyber-Physical Systems
Sanja Lazarova-Molnar, Parisa Niloofar, and Gabor Kevin Barta (University of Southern Denmark)
Abstract Abstract
Reliability of a system is a measure of the likelihood that the system will perform as expected for a predefined time period. Fault tree analysis, as a popular method for analyzing reliability of systems, has gained widespread use in many areas, such as the automotive and aviation industries. Fault trees of systems are usually designed by domain experts, based on their knowledge. Since systems change their behaviors during their lifetimes, knowledge-driven fault trees might not accurately reflect the true behaviors of systems. Thus, deriving fault trees from data would be a better alternative, especially for non-safety-critical systems, where the amounts of data on faults can be significant. We present an approach and a tool for Data-Driven Fault Tree Analysis (DDFTA) that extracts fault trees from time series data of a system and uses simulation to analyze the extracted fault trees to estimate reliability measures of systems.
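Once a fault tree has been extracted, estimating system reliability by simulation is straightforward. The sketch below uses a hypothetical two-gate tree and made-up component failure probabilities (it is not the DDFTA tool) to evaluate the top-event probability by Monte Carlo:

```python
import random

# Hypothetical toy fault tree for illustration: TOP = (A AND B) OR C,
# i.e., the system fails if component C fails, or if both A and B fail.
def top_event(a_fails, b_fails, c_fails):
    return (a_fails and b_fails) or c_fails

def estimate_reliability(p_fail, n=100_000, seed=7):
    """Monte Carlo estimate of P(no top event) given per-component
    failure probabilities (in practice, estimated from time series data)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        failures += top_event(rng.random() < p_fail["A"],
                              rng.random() < p_fail["B"],
                              rng.random() < p_fail["C"])
    return 1.0 - failures / n
```

For this toy tree the answer is available in closed form, P(top) = pA·pB + pC − pA·pB·pC, which makes the Monte Carlo estimate easy to check; data-derived trees with repair and dependency structure would generally require the simulation.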
pdf
Simulation in Industry 4.0
New Frameworks for Industry 4.0 Applications
Chair: Martijn Mes (University of Twente)
A Method Proposal for Conducting Simulation Projects in Industry 4.0: A Cyber-Physical System in an Aeronautical Industry
José Arnaldo Barra Montevechi, Carlos Henrique dos Santos, Gustavo Teodoro Gabriel, Mona Liza Moura de Oliveira, José Antônio de Queiroz, and Fabiano Leal (Universidade Federal de Itajubá)
Abstract Abstract
In Industry 4.0, computer simulation is changing its application. Simulation models focused on specific analyses are giving way to models with a degree of automation that are integrated with different systems. This integration favors constant model updating based on variations in the real system, enabling faster decision making. In this way, simulation models are part of cyber-physical systems, since they represent a virtual version of the real process. In this context, the present work proposes a method adapted from Montevechi et al. (2010) to carry out simulation projects according to Industry 4.0 principles. Finally, the proposed method was applied to a supply material process in an aeronautical industry, validating it and showing how simulation works in Industry 4.0.
pdf
A General Simulation Framework for Smart Yards
Matteo Brunetti, Martijn Mes, and Jelle van Heuveln (University of Twente)
Abstract Abstract
This paper presents a simulation framework for the logistics operations at Smart Yards. A Smart Yard is a digital and physical system enabling the collaboration of various companies at a logistics hub, e.g., seaport, airport or hinterland distribution center, and characterized by a decoupling point, automated vehicles for internal handling of cargo, and data sharing technologies. The framework is a high-level conceptual model for a hybrid Discrete Event and Agent-Based Simulation, comprising inputs, outputs, assumptions, flowcharts, and agents representing the complex interrelation of stakeholders and shared autonomous vehicles. We illustrate the concept of Smart Yard using three case studies and apply our simulation framework to one of these cases by analyzing the use of a Smart Yard at Amsterdam Airport Schiphol.
pdf
Scalable, Reconfigurable Simulation Models in Industry 4.0-Oriented Enterprise Modeling
Mahdi El Alaoui El Abdellaoui (elm.leblanc, Bosch Group; Mines Saint-Etienne); Frédéric Grimaud, Paolo Gianessi, and Xavier Delorme (Mines Saint-Etienne); and Emmanuel Bricard (elm.leblanc, Bosch Group)
Abstract Abstract
Increasingly unpredictable demand is among the major challenges of Industry 4.0, demanding from manufacturing systems (MS) a capacity for adaptation that accelerates decision processes. Decision Support Systems (DSS), and among them simulation-based DSS, must therefore result from new Enterprise Modeling (EM) frameworks capable of taking advantage of advances in data integration and processing possibilities. After briefly introducing such a new EM framework, MEMO I4.0, this article shows how to use it to derive scalable, flexible simulation models, so as to improve and monitor the MS processes and finally address this challenge. An example generating a simulation model for a real case study is presented.
pdf
Simulation Optimization
Track Coordinator - Simulation Optimization: Susan R. Hunter (Purdue University), Peter Salemi (MITRE Corporation)
Simulation Optimization
Constraints I
Chair: Andrea Matta (Politecnico di Milano)
Sample-Path Algorithm for Global Optimal Solution of Resource Allocation in Queueing Systems with Performance Constraints
Mengyi Zhang and Andrea Matta (Politecnico di Milano) and Arianna Alfieri (Politecnico di Torino)
Abstract Abstract
Resource allocation problems with performance constraints (RAP-PC) are a category of optimization problems in queueing system design, widely found in the operations management of manufacturing and service systems. RAP-PC aims at finding the system with minimum cost while guaranteeing target performance, which usually can be obtained only by simulation due to the complexity of practical systems. This work considers the optimization of integer-ordered variables on which the system performance is monotonic. It proposes an algorithm providing a sample-path exact solution within finite time. Specifically, the algorithm works on the mathematical programming model of RAP-PC and uses logic-based exact and gradient-based approximate feasibility cuts to define and reduce the feasible region. Results on randomly generated instances show that the proposed approach can solve to optimality problems with up to 9 dimensions within two hours, and good-quality feasible solutions can be found faster than with the state-of-the-art algorithm.
pdf
Batch Bayesian Active Learning for Feasible Region Identification by Local Penalization
Jixiang Qing, Nicolas Knudde, Ivo Couckuyt, and Tom Dhaene (Ghent University) and Kohei Shintani (Toyota Motor Corporation)
Abstract Abstract
Identifying all designs satisfying a set of constraints is an important part of the engineering design process. With physics-based simulation codes, evaluating the constraints becomes considerably expensive. Active learning can provide an elegant approach to efficiently characterize the feasible region, i.e., the set of feasible designs. Although active learning strategies have been proposed for this task, most of them add just one sample per iteration as opposed to selecting multiple samples per iteration, also known as batch active learning. While adding one sample at a time is efficient with respect to the amount of information gained per iteration, it neglects available computational resources. We propose a batch Bayesian active learning technique for feasible region identification by assuming that the constraint function is Lipschitz continuous. In addition, we extend current state-of-the-art batch methods to also handle feasible region identification. Experiments show better performance of the proposed method than the extended batch methods.
pdf
Simulation Optimization
Constraints II
Chair: Dashi I. Singham (Naval Postgraduate School)
Sample Average Approximation for Functional Decisions Under Shape Constraints
Dashi Singham (Naval Postgraduate School) and Henry Lam (Columbia University)
Abstract Abstract
Sample average approximation methods are most often applied when the set of decision variables is finite. This research develops a method of finding optimal solutions to infinite-dimensional simulation optimization problems when the decision variable is a monotone function of a random variable used to model the uncertainty itself. This problem is motivated by approximately solving the principal-agent problem in economics, but also has close connections to nonparametric statistical estimation. We demonstrate how to approximate the infinite-dimensional problem with a discrete formulation that allows the use of standard sample average approximation methods. We also demonstrate how to utilize related bounding techniques on the optimal value, and show convergence results for the estimated optimal values and solutions.
pdf
Feasibility Determination When the Feasible Region is Defined by Non-linear Inequalities
Daniel Solow (Case Western Reserve University), Roberto Szechtman (Naval Postgraduate School), and Enver Yucesan (INSEAD)
Abstract Abstract
Heuristic algorithms exist for determining whether each of r systems, characterized by m performance measures estimated through Monte Carlo simulation, belongs to a given set Γ when Γ is defined by a finite collection of linear inequalities. The work here provides a heuristic for addressing a version of this problem in which Γ is defined by a finite collection of nonlinear inequalities. This approach allows the user to choose a desired level of confidence with which a system is correctly classified. The algorithm then uses appropriately sized confidence rectangles centered at the estimated means to decide when a system can be classified as in Γ or not in Γ, with the specified level of confidence. While the worst-case behavior could potentially be bad, computational experiments show that the performance of the algorithm on randomly generated problems is satisfactory.
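To illustrate the flavor of such a classification rule, the sketch below screens one system's confidence rectangle against a single nonlinear constraint g(x) ≤ 0 by checking the rectangle's corners. The corner check is decisive only for componentwise-monotone g; that simplification, like the example constraint, is an assumption of this sketch and not part of the paper's heuristic.

```python
from itertools import product

def classify(mean, halfwidth, g):
    """Classify a system via its confidence rectangle.

    mean, halfwidth: per-measure sample means and CI halfwidths.
    g: constraint function; x is feasible when g(x) <= 0.
    Corner screening is conclusive only for componentwise-monotone g
    (an assumption of this sketch).
    """
    corners = product(*[(m - h, m + h) for m, h in zip(mean, halfwidth)])
    vals = [g(list(c)) for c in corners]
    if all(v <= 0 for v in vals):
        return "in"          # whole rectangle feasible
    if all(v > 0 for v in vals):
        return "out"         # whole rectangle infeasible
    return "undecided"       # rectangle straddles the boundary; sample more
```

With g(x) = x₁² + x₂² − 1, a rectangle well inside the unit disk is classified "in", one far outside is "out", and one straddling the boundary stays "undecided", prompting further sampling to shrink the halfwidths.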
pdf
Identifying the Best System in the Presence of Stochastic Constraints with Varying Thresholds
Yuwei Zhou, Sigrun Andradottir, and Seong-Hee Kim (Georgia Institute of Technology)
Abstract Abstract
We consider the problem of finding a system with the best primary performance measure among a finite number of simulated systems in the presence of stochastic constraints on secondary performance measures. When no feasible system exists, the decision maker may be willing to consider looser thresholds. Given that there is no change in the underlying simulated systems, we adopt the concept of green simulation and perform the feasibility check across all potential thresholds simultaneously. We propose an indifference-zone procedure that takes multiple threshold values for each constraint based on the user's inputs and returns the best system that is feasible with respect to the most desirable thresholds. We prove that our procedure is statistically valid in that it yields the best system in the most desirable feasible region possible with at least a prespecified probability. Our experimental results show that the proposed procedure performs well with respect to the number of required observations.
pdf
Simulation Optimization
Bayesian Optimization
Chair: Juergen Branke (Warwick Business School)
Modification of Bayesian Optimization for Efficient Calibration of Simulation Models
Daiki Kiribuchi, Masashi Tomita, and Takeichiro Nishikawa (Toshiba Corporation) and Satoru Yokota, Ryota Narasaki, and Soh Koike (Kioxia Corporation)
Abstract Abstract
Simulation models contain many parameters that must be adjusted (calibrated) in advance to reduce the error between simulations and experimental results. Bayesian optimization is often applied to minimize error after only a few simulations. However, Bayesian optimization uses only error information, ignoring information on other simulation results. In this paper, we improve Bayesian optimization by utilizing both and show that other simulation results effectively reduce the dimensionality of the parameter space. In an evaluation using actual semiconductor simulation results, the proposed method reduces the number of simulations by 50% compared with random search and conventional Bayesian optimization.
pdf
Partition-Based Bayesian Optimization for Stochastic Simulations
Songhao Wang and Szu Hui Ng (National University of Singapore)
Abstract Abstract
Bayesian optimization (BO) is a popular simulation optimization approach. Despite its many successful applications, there remain several practical issues that have to be addressed. These include the non-trivial optimization of the inner acquisition function (search criterion) to find the future evaluation points and the over-exploitative behavior of some BO algorithms. These issues can cause BO to select inferior points or get trapped in local optimal regions before exploring other more promising regions. This work proposes a new partition-based BO algorithm where the acquisition function is optimized over a representative set of finite points in each partition instead of the whole design space to reduce the computational complexity. Additionally, to overcome over-exploitation, the algorithm considers regions of different sizes simultaneously in each iteration, providing a focus on exploration in larger regions, especially at the start of the algorithm. Numerical experiments show that these features help in faster convergence to the optimal point.
pdf
Bayesian Optimization Searching for Robust Solutions
Hoai Phuong Le and Juergen Branke (University of Warwick)
Abstract Abstract
This paper considers the use of Bayesian optimization to identify robust solutions, where robust means having a high expected performance given disturbances over the decision variables and independent noise in the output. We propose a variant of the well-known Knowledge Gradient acquisition function that has been proposed for the case of optimizing integrals. We empirically evaluate our method on one and two dimensional functions and demonstrate that it significantly outperforms the uniform allocation of sampling points and an alternative approach that estimates each function value by averaging over a random sample.
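The averaging baseline mentioned in the abstract is simple to state: estimate the robust (expected) performance at a design point by averaging the objective over random disturbances of the decision variable. A minimal sketch with a hypothetical objective and Gaussian disturbances (this is the comparison baseline, not the paper's Knowledge Gradient variant):

```python
import random

def robust_value(f, x, n=1000, noise_sd=0.3, seed=5):
    """Plain Monte Carlo estimate of E[f(x + eps)], eps ~ N(0, noise_sd^2).

    This is the naive averaging baseline; a Bayesian optimization
    approach would instead fit a surrogate to the noisy evaluations
    and allocate samples adaptively. Parameters are illustrative.
    """
    rng = random.Random(seed)
    return sum(f(x + rng.normalvariate(0.0, noise_sd)) for _ in range(n)) / n
```

For f(t) = t², the exact robust value at x is x² + noise_sd², so the estimator is easy to sanity-check before comparing against an adaptive allocation.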
pdf
Simulation Optimization
Markov Decision Processes
Chair: Yijie Peng (Peking University)
Asynchronous Value Iteration for Markov Decision Processes with Continuous State Spaces
Xiangyu Yang (Fudan University); Jiaqiao Hu (State University of New York, Stony Brook); Jian-Qiang Hu (Fudan University); and Yijie Peng (Peking University)
Abstract Abstract
We propose a simulation-based value iteration algorithm for approximately solving infinite horizon discounted MDPs with continuous state spaces and finite actions. At each time step, the algorithm employs the shrinking ball method to estimate the value function at sampled states and uses historical estimates in an interpolation-based fitting strategy to build an approximator of the optimal value function. Under moderate conditions, we prove that the sequence of approximators generated by the algorithm converges uniformly to the optimal value function with probability one. Simple numerical examples are provided to compare our algorithm with two other existing methods.
pdf
The Actor-Critic Algorithm for Infinite Horizon Discounted Cost Revisited
Abhijit Gosavi (Missouri University of Science and Technology)
Abstract Abstract
Reinforcement Learning (RL) is a methodology used to solve Markov decision processes (MDPs) within simulators. In the classical Actor-Critic (AC), a popular RL algorithm, the values of the so-called actor become unbounded. A recently introduced variant of the AC keeps the actor's values naturally bounded. However, the algorithm's convergence properties have not been established mathematically in the literature. Numerically, the bounded AC was studied under the Boltzmann action-selection strategy, but not under the more popular ε-greedy strategy in which the probability of selecting any non-greedy action converges to zero in the limit. The paper revisits the AC framework. A short review of the existing literature in the growing field of ACs is first presented. Thereafter, the algorithm is investigated for its convergence properties, under ε-greedy action selection, numerically on a small-scale MDP, as well as mathematically via the ordinary differential equation framework.
pdf
Risk-efficient Sequential Simulation Estimators
Raghu Pasupathy (Purdue University) and Yingchieh Yeh (National Central University)
Abstract Abstract
Using steady state mean estimation as the prototypical context, we present a decision-theoretic framework for sequentially estimating quantities associated with an observable discrete-time stochastic process. Our framework includes weights for the quality of the steady state mean estimator and a linear cost of sampling. We first construct a closed-form expression for the optimal time to stop sampling in the hypothetical case when the autocovariance function of the process is known. This expression inspires a sequential procedure that uses a partially overlapping batch means estimator to "stand in" for the area under the autocovariance function. The sequential procedure is asymptotically optimal in the sense that the ratio of its risk to the optimal risk in the hypothetical scenario approaches unity in a certain asymptotic regime.
pdf
Simulation Optimization
Gaussian Processes
Chair: Eunhye Song (Pennsylvania State University)
Smart Linear Algebraic Operations for Efficient Gaussian Markov Improvement Algorithm
Xinru Li and Eunhye Song (Pennsylvania State University)
Abstract Abstract
This paper studies computational improvement of the Gaussian Markov improvement algorithm (GMIA), whose underlying response surface model is a Gaussian Markov random field (GMRF). GMIA’s computational bottleneck lies in the sampling decision, which requires factorizing and inverting a sparse, but large, precision matrix of the GMRF at every iteration. We propose smart GMIA (sGMIA), which performs expensive linear algebraic operations intermittently, while recursively updating the vectors and matrices necessary to make sampling decisions for several iterations in between. The latter iterations are much cheaper than the former at the beginning, but their costs increase as the recursion continues and ultimately surpass the cost of the former. sGMIA adaptively decides how long to continue the recursion by minimizing the average per-iteration cost. We perform a floating-point operation analysis to demonstrate the computational benefit of sGMIA. Experiment results show that sGMIA enjoys computational efficiency while achieving the same search effectiveness as GMIA.
pdf
A Gaussian Process Based Algorithm for Stochastic Simulation Optimization with Input Distribution Uncertainty
Haowei Wang, Szu Hui Ng, and Xun Zhang (National University of Singapore)
Abstract Abstract
Stochastic simulation models are increasingly popular for analyzing complex stochastic systems. However, the input distributions driving the simulation models are typically unknown in practice and are usually estimated from real world data. Since the size of real world data tends to be limited, the resulting estimation of input distribution will contain errors. This estimation error is commonly known as input uncertainty. In this paper, we consider the stochastic simulation optimization problem when the input uncertainty is present and assume that both the input distribution family and the distribution parameters are unknown. Traditional efficient metamodel-based optimization approaches like Efficient Global Optimization (EGO) do not take the input uncertainty into account. This can lead to sub-optimal decisions when the input uncertainty level is high. Here, we adopt a nonparametric Bayesian approach to model the input uncertainty and propose an EGO-based simulation optimization algorithm that explicitly accounts for the input uncertainty.
pdf
Sensitivity Analysis of Arc Criticalities in Stochastic Activity Networks
Peng Wan and Michael C. Fu (The University of Maryland at College Park)
Abstract Abstract
Using Monte Carlo simulation, this paper proposes a new algorithm for estimating the arc criticalities of stochastic activity networks. The algorithm is based on the following result: given the lengths of all arcs in a network except for the one arc of interest, that arc is on the critical path (longest path) if and only if its length is greater than a threshold. Accordingly, the new algorithm is named Threshold Arc Criticality (TAC). By applying Infinitesimal Perturbation Analysis (IPA) to TAC, an unbiased estimator of the stochastic derivative of the arc criticalities with respect to parameters of the arc length distributions can be derived. With a valid estimator of the stochastic derivative of arc criticalities, sensitivity analysis of arc criticalities is carried out via simulation of a small test network.
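The threshold idea can be demonstrated on a tiny network. In the sketch below, a hypothetical two-path network (not the paper's test network) runs arcs 1 and 2 in series on one path and arc 3 alone on the other, so arc 3's threshold given the other arcs is simply X1 + X2 and its criticality is P(X3 > X1 + X2). Arc length distributions are illustrative exponentials.

```python
import random

def estimate_criticality(n=200_000, seed=3):
    """Monte Carlo criticality of arc 3 in a toy two-path network.

    Path A = arcs 1 and 2 in series, path B = arc 3 alone.
    Arc 3 lies on the longest path iff X3 exceeds its threshold X1 + X2.
    X1, X2 ~ Exp(rate 1), X3 ~ Exp(rate 0.5) are illustrative choices.
    """
    rng = random.Random(seed)
    count = 0
    for _ in range(n):
        x1 = rng.expovariate(1.0)
        x2 = rng.expovariate(1.0)
        x3 = rng.expovariate(0.5)       # mean 2
        count += x3 > x1 + x2           # threshold test for arc 3
    return count / n
```

For these exponentials the criticality is available exactly, P(X3 > X1 + X2) = E[e^{-(X1+X2)/2}] = (2/3)² ≈ 0.444, which the simulation reproduces; the paper's IPA step would additionally differentiate such probabilities with respect to the distribution parameters.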
pdf
Simulation Optimization
Gradient-based Optimization
Chair: David J. Eckman (Northwestern University)
Simulation Optimization by Reusing Past Replications: Don't Be Afraid of Dependence
Tianyi Liu and Enlu Zhou (Georgia Institute of Technology)
Abstract Abstract
The main challenge of simulation optimization is the limited simulation budget imposed by the high computational cost of simulation experiments. One approach to overcoming this challenge is to reuse simulation outputs from previous iterations in the current iteration of the optimization procedure. However, because of the dependence among iterations, simulation replications from different iterations are not independent, and the approach's good empirical performance has lacked theoretical justification. In this paper, we fill this gap by theoretically studying the stochastic gradient descent method with reuse of past simulation replications. We show that reusing past replications does not change the convergence of the algorithm, which implies that the bias of the gradient estimator is asymptotically negligible. Moreover, we show that reusing past replications reduces the variance of gradient estimators conditioned on the history, which implies that the algorithm can use larger step-size sequences to achieve faster convergence.
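A toy version of replication reuse makes the idea concrete. In the sketch below (a one-dimensional quadratic objective, not the paper's setting), each stochastic gradient averages the fresh simulation draw with a window of draws retained from earlier iterations; because the noise distribution here does not depend on the decision variable, the reused draws remain valid gradient information.

```python
import random

def sgd_with_reuse(n_iter=2000, step0=0.5, reuse=20, seed=11):
    """Minimize f(x) = E[(x - xi)^2] with xi ~ N(2, 1), so x* = 2.

    The stochastic gradient 2(x - xi) is averaged over the fresh draw
    plus up to `reuse` draws kept from earlier iterations. A toy
    illustration of replication reuse, not the paper's algorithm.
    """
    rng = random.Random(seed)
    x, history = 0.0, []
    for k in range(1, n_iter + 1):
        history.append(rng.normalvariate(2.0, 1.0))   # fresh replication
        batch = history[-reuse:]                      # reuse recent draws
        grad = sum(2.0 * (x - z) for z in batch) / len(batch)
        x -= (step0 / k) * grad                       # diminishing step size
    return x
```

Compared with using only the single fresh draw each iteration, the reused window cuts the conditional variance of the gradient estimate by roughly the window size, which is the effect the paper formalizes.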
pdf
Biased Gradient Estimators in Simulation Optimization
David J. Eckman (Northwestern University) and Shane G. Henderson (Cornell University)
Abstract Abstract
Within the simulation community, the prevailing wisdom seems to be that when solving a simulation optimization problem, biased gradient estimators should not be used to guide a local-search algorithm. On the contrary, we argue that for certain problems, biased gradient estimators may still provide useful directional information. We focus on the infinitesimal perturbation analysis (IPA) gradient estimator, which is biased when an interchange of differentiation and expectation fails. Although a local-search algorithm guided by biased gradient estimators will likely not converge to a local optimal solution, it might be expected to reach a neighborhood of one. We test such a gradient-based search on an ambulance base location problem, demonstrating its effectiveness in a non-trivial example, and present some supporting theoretical results.
pdf
Unbiased Gradient Simulation for Zeroth-order Optimization
Guanting Chen (Stanford University)
Abstract Abstract
We apply the Multi-Level Monte Carlo technique to get an unbiased estimator for the gradient of an optimization function. This procedure requires four exact or noisy function evaluations and produces an unbiased estimator for the gradient at one point. We apply this estimator to a non-convex stochastic programming problem. Under mild assumptions, our algorithm achieves a complexity bound independent of the dimension, compared with the typical one that grows linearly with the dimension.
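The telescoping trick behind such unbiased estimators can be sketched in a few lines. The code below applies randomized multilevel Monte Carlo to central finite differences of a smooth deterministic function; it is a generic illustration of the technique (with its own evaluation count and level distribution), not the paper's estimator for noisy evaluations.

```python
import random

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

def unbiased_grad(f, x, rng, h0=0.5, r=0.5):
    """One sample of a randomized multilevel (MLMC) estimator of f'(x).

    Let D_l be the central difference with step h0 * 2**-l. Since
    D_0 + sum_{l>=1} (D_l - D_{l-1}) = lim_l D_l = f'(x) for smooth f,
    drawing a random level L with P(L = l) = (1 - r) * r**(l - 1) and
    returning D_0 + (D_L - D_{L-1}) / P(L = l) is unbiased for f'(x).
    """
    level = 1
    while rng.random() < r:                 # geometric level distribution
        level += 1
    p = (1.0 - r) * r ** (level - 1)
    correction = (central_diff(f, x, h0 * 2.0 ** -level)
                  - central_diff(f, x, h0 * 2.0 ** -(level - 1)))
    return central_diff(f, x, h0) + correction / p
```

Averaging many independent samples recovers the exact derivative with no discretization bias; for f(t) = t³ at x = 1 the mean converges to 3 even though every individual finite difference is biased.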
pdf
Simulation Optimization
Ranking and Selection
Chair: Linda Pei (Northwestern University)
Evaluation of bi-PASS for Parallel Simulation Optimization
Linda Pei and Barry Nelson (Northwestern University) and Susan Hunter (Purdue University)
Abstract Abstract
Cheap parallel computing has greatly extended the reach of ranking & selection (R&S) for simulation optimization. In this paper we present an evaluation of bi-PASS, an R&S procedure created specifically for parallel implementation and very large numbers of system designs. We compare bi-PASS to the state-of-the-art Good Selection Procedure and an easy-to-implement subset-selection procedure. This is one of the few papers to consider both computational and statistical comparison of parallel R&S procedures.
pdf
Revisiting Subset Selection
David J. Eckman, Matthew Plumlee, and Barry L. Nelson (Northwestern University)
Abstract
In the subset-selection approach to ranking and selection, a decision-maker seeks a subset of simulated systems that contains the best with high probability. We present a new, generalized framework for constructing these subsets and demonstrate that some existing subset-selection procedures are situated within this framework. The subsets are built by calculating, for each system, a minimum standardized discrepancy between the observed performances and the space of problem instances for which that system is the best. A system's minimum standardized discrepancy is then compared to a cutoff to determine whether the system is included in the subset. We examine the problem of finding the tightest statistically valid cutoff for each system and draw connections between our approach and other subset-selection methodologies. Simulation experiments demonstrate how the screening power and subset size are affected by the choice of standardized discrepancy.
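A minimal instance of the discrepancy-vs-cutoff idea is classic mean-based subset selection: standardize each system's gap to the best observed sample mean and keep the system if the gap falls below a cutoff. The sketch below uses a user-chosen cutoff t for illustration; it is not the paper's tightest statistically valid cutoff:

```python
import math
import statistics

def subset_select(samples, t):
    """Keep system i if its standardized gap to the best sample mean,
    (xbar_best - xbar_i) / sqrt(se_i^2 + se_best^2), is at most t.
    `samples`: dict mapping system name -> list of i.i.d. outputs
    (larger is better); `t`: cutoff (illustrative choice)."""
    summ = {}
    for name, xs in samples.items():
        n = len(xs)
        summ[name] = (statistics.fmean(xs), statistics.stdev(xs) / math.sqrt(n))
    best = max(summ, key=lambda k: summ[k][0])
    mb, seb = summ[best]
    return {name for name, (m, se) in summ.items()
            if (mb - m) / math.sqrt(se ** 2 + seb ** 2) <= t}
```

Systems far below the best are screened out, while statistically indistinguishable systems survive into the subset.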
Sequential Sampling for a Ranking and Selection Problem with Exponential Sampling Distributions
Gongbo Zhang, Haidong Li, and Yijie Peng (Peking University)
Abstract
We study a ranking and selection problem with exponential sampling distributions. Under a Bayesian framework, we derive the posterior distribution of a performance parameter and provide a normal approximation for the posterior distribution, based on a central limit theorem, to learn the performance parameter efficiently. We formulate the dynamic sampling decision as a stochastic control problem and propose a sequential sampling procedure that maximizes a one-step-ahead value function approximation and is proven consistent. Numerical results demonstrate the efficiency of the proposed method.
Simulation Optimization
Metamodel-based Optimization
Chair: Loo Hay Lee (National University of Singapore)
A Hybrid of Shrinking Ball Method and Optimal Large Deviation Rate Estimation in Continuous Contextual Simulation Optimization with Single Observation
Xiao Jin, Yichi Shen, Loo Hay Lee, Ek Peng Chew, and Christine Ann Shoemaker (National University of Singapore)
Abstract
We propose a new method for solving continuous contextual simulation optimization with a single observation. By adopting the estimation of the large deviation rate from the contextual ranking and selection problem, we transfer the existing theory to the continuous setting using a shrinking-ball construct. Through the estimation of this rate, the new method is expected to achieve optimal performance in this new problem setting. Brief numerical experiments show significant advantages of our method over a uniform sampling scheme.
Ordinal Optimization with Generalized Linear Model
Dohyun Ahn (The Chinese University of Hong Kong) and Dongwook Shin (HKUST)
Abstract
Given a number of stochastic systems, we consider an ordinal optimization problem to find an optimal allocation of a finite sampling budget that maximizes the likelihood of selecting the "best" system, where the "best" is defined as the one with the highest mean. The statistical characteristics of each system are described by a generalized linear model, whose unknown parameters are estimated using maximum likelihood estimation. To formulate the problem in a tractable form, we use large deviations theory to characterize the structural properties of the optimal allocation. Further, motivated by Euclidean information theory, we obtain an approximate solution for the optimal allocation, which is leveraged to design a sampling strategy that is near-optimal in a suitable asymptotic sense. The proposed sampling strategy is computationally tractable, and we show via numerical testing that it performs competitively even in the presence of model misspecification.
Global Optimization for Noisy Expensive Black-Box Multi-Modal Functions via Radial Basis Function Surrogate
Yichi Shen and Christine A. Shoemaker (National University of Singapore)
Abstract
This study proposes a new surrogate global optimization algorithm for problems with expensive black-box multi-modal objective functions subject to homogeneous evaluation noise. Specifically, we propose a new radial basis function (RBF) surrogate to approximate noisy functions and extend the Stochastic Response Surface method (Regis and Shoemaker 2007), which was developed for deterministic problems, to optimize noisy functions. Instead of conducting multiple replications at each point to mitigate the influence of noise, we take only a single observation at every sampled point and regularize the RBF surrogate by penalizing its bumpiness. The proposed algorithm sequentially identifies a new point for expensive function evaluation from a set of randomly generated candidate points, balancing exploitation and exploration. Numerical studies show that the proposed noisy RBF surrogate produces reliable approximations of noisy functions, and that the proposed algorithm is effective and competitive in solving global optimization problems with noisy evaluations.
Simulation Optimization
Applications I
Chair: Amos Ng (University of Skövde)
Real-time Decision Making for a Car Manufacturing Process Using Deep Reinforcement Learning
Timo P. Gros, Joschka Groß, and Verena Wolf (Saarland University)
Abstract
Computer simulations of manufacturing processes are in widespread use for optimizing production planning and order processing. If unforeseeable events are common, real-time decisions are necessary to maximize the performance of the manufacturing process. Pre-trained AI-based decision support offers promising opportunities for such time-critical production processes.
Here, we explore the effectiveness of deep reinforcement learning for real-time decision making in a car manufacturing process. We combine a simulation model of a central production part, the line buffer, with deep reinforcement learning algorithms, in particular with deep Q-Learning and Monte Carlo tree search.
We simulate two different versions of the buffer, a single-agent and a multi-agent one, to generate large amounts of data and train neural networks to represent near-optimal strategies. Our results show that deep reinforcement learning performs extremely well, and the resulting strategies provide near-optimal decisions in real time, while alternative approaches are either too slow or yield strategies of poor quality.
Aircraft Assembly Ramp-up Planning using a Hybrid Simulation-Optimization Approach
Amos H.C. Ng, Jacob Bernedixen, and Martin Andersson (University of Skövde, Evoma AB); Sunith Bandaru (University of Skövde); and Thomas Lezama (Airbus Group)
Abstract
Assembly processes have the greatest and most long-term impact on production volume and cost in the aerospace industry. One of the most crucial factors in aircraft assembly line design during the conceptual design phase is ramp-up planning, which synchronizes the production rates at globally dispersed facilities. Inspired by a pilot study performed with an aerospace company, this paper introduces a hybrid simulation-optimization approach for an assembly production chain ramp-up problem whose formulation takes into account: (1) the interdependencies of the ramp-up profiles between final assembly lines and their upstream lines; (2) workforce planning with various learning curves; and (3) inter-plant buffer and lead-time optimization. The approach supports the optimization of a ramp-up profile that minimizes the time the aircraft assemblies stay in the buffers while simultaneously attaining zero backlog. It also generates the simulation-optimization data required to support decision-making activities in industrialization projects.
Integration of Deep Reinforcement Learning and Discrete-Event Simulation for Real-Time Scheduling of a Flexible Job Shop Production
Sebastian Lang (Fraunhofer Institute for Factory Operation and Automation IFF, Otto von Guericke University Magdeburg); Nico Lanzerath (Otto von Guericke University Magdeburg); Tobias Reggelin (Otto von Guericke University Magdeburg, Fraunhofer Institute for Factory Operation and Automation IFF); Marcel Müller (Otto von Guericke University Magdeburg); and Fabian Behrendt (Fraunhofer Institute for Factory Operation and Automation IFF, SRH Mobile University)
Abstract
The following paper presents the application of Deep Q-Networks (DQN) for solving a flexible job shop problem with integrated process planning. DQN is a deep reinforcement learning algorithm that trains an agent to perform a specific task. In particular, we train two DQN agents in connection with a discrete-event simulation model of the problem, where one agent is responsible for the selection of operation sequences, while the other allocates jobs to machines. We compare the performance of DQN with the GRASP algorithm of Rajkumar et al. (2010). After less than one hour of training, DQN generates schedules with a lower makespan and total tardiness than the GRASP algorithm. Our first investigations reveal that DQN appears to generalize from the training data to other problem cases. Once trained, the prediction and evaluation of new production schedules requires less than 0.2 seconds.
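At the core of DQN is the Q-learning update, which the network approximates. A tabular version of that update, shown here as a simplified stand-in rather than the paper's agents or their simulation coupling, looks like:

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step: move Q(s, a) toward the bootstrapped
    target r + gamma * max_a' Q(s', a'). DQN replaces the table with a
    neural network trained on the same target."""
    q_sa = Q.get((s, a), 0.0)
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = q_sa + alpha * (r + gamma * best_next - q_sa)
    return Q
```

In a job-shop setting, the state would encode buffer and machine status, and the actions would be operation-sequence or machine-assignment choices.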
Simulation Optimization
Applications II
Chair: Enlu Zhou (Georgia Institute of Technology)
Simulation-based Replacement Line and Headway Optimization
David Schmaranzer, Alexander Kiefer, Roland Braune, and Karl F. Doerner (University of Vienna)
Abstract
We present a study on simulation-based replacement line and headway optimization for the Viennese public transportation system. The problem concerns scheduled closures of subway lines. A genetic algorithm is proposed to design replacement lines and potentially adjust the headways of all lines in the network. Candidate networks are simulated to evaluate their solution quality. The underlying discrete-event simulation model has several stochastic elements (e.g., vehicle travel and turning maneuver times). Passenger creation is a Poisson process that uses daily origin-destination matrices based on anonymous mobile phone and count data. Vehicles are subject to capacity restrictions. Computational insights are gained from three real-world-based test instances. Our problem-specific genetic algorithm creates not only good but also robust solutions by taking the stochastic elements into account.
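The passenger-creation step can be sketched as a homogeneous Poisson process generated from exponential inter-arrival gaps. The rate, horizon, and seed below are illustrative, and the paper's time-varying, OD-matrix-driven rates are omitted:

```python
import random

def poisson_arrivals(rate_per_min, horizon_min, seed=7):
    """Arrival times (in minutes) of a homogeneous Poisson process on
    [0, horizon): successive Exponential(rate) inter-arrival gaps are
    accumulated until the horizon is exceeded."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_per_min)
        if t >= horizon_min:
            return times
        times.append(t)
```

With rate 2 per minute over 1000 minutes, roughly 2000 arrivals are produced, and each arrival would then be assigned an origin-destination pair in a fuller model.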
Multiobjective Optimization of the Variability of the High-Performance LINPACK Solver
Tyler H. Chang and Jeffrey Larson (Argonne National Laboratory) and Layne T. Watson (Virginia Polytechnic Institute and State University)
Abstract
Variability in the execution time of computing tasks can cause load imbalance in high-performance computing (HPC) systems. When configuring system- and application-level parameters, engineers traditionally seek configurations that maximize the mean computational throughput. In an HPC setting, however, high-throughput configurations that do not account for performance variability can result in poor load balancing. To determine the effects of performance variance on computationally expensive numerical simulations, the High-Performance LINPACK solver is tuned by using multiobjective optimization to maximize the mean and minimize the standard deviation of the computational throughput on the High-Performance LINPACK benchmark. We show that specific configurations of the solver can be used to control variability at a small sacrifice in mean throughput. We also identify configurations that achieve a relatively high mean throughput but exhibit high throughput variability.
A Nested Simulation Optimization Approach for Portfolio Selection
Yifan Lin and Enlu Zhou (Georgia Institute of Technology) and Aly Megahed (IBM)
Abstract
We consider the problem of portfolio selection with risk factors, where the goal is to select the portfolio position that minimizes the value at risk (VaR) of the expected portfolio loss. The problem is computationally challenging due to the nested structure caused by the risk measure VaR of the conditional expectation, along with the optimization over a discrete and finite solution space. We develop a nested simulation optimization approach to solve this problem. In the outer layer, we adapt the optimal computing budget allocation (OCBA) approach to sequentially allocate the simulation budget of the outer-layer to different portfolio positions. In the inner layer, we propose a new sequential procedure to efficiently estimate the VaR of the expected loss. We present a numerical example that shows that our approach achieves a higher probability of correct selection under the same computing budget compared to three other methods.
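The outer-layer idea builds on the classical OCBA asymptotic allocation (Chen et al.). The fractions below are that textbook rule, specialized to selecting the lowest-mean (smallest-loss) system, and are offered as a hedged sketch rather than the paper's adapted sequential procedure:

```python
import math

def ocba_fractions(means, stds):
    """Asymptotically optimal OCBA budget fractions for selecting the
    system with the smallest mean: for non-best i, alpha_i is proportional
    to (std_i / gap_i)^2 with gap_i = mean_i - mean_best, and
    alpha_best = std_best * sqrt(sum_{i != best} (alpha_i / std_i)^2).
    Assumes distinct means and positive stds."""
    b = min(range(len(means)), key=lambda i: means[i])
    alloc = [0.0] * len(means)
    for i, (m, s) in enumerate(zip(means, stds)):
        if i != b:
            alloc[i] = (s / (m - means[b])) ** 2
    alloc[b] = stds[b] * math.sqrt(sum((alloc[i] / stds[i]) ** 2
                                       for i in range(len(means)) if i != b))
    total = sum(alloc)
    return [a / total for a in alloc]
```

Close competitors with large noise receive the most budget, while clearly inferior systems receive little, which is the intuition the sequential outer-layer allocation exploits.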
Using Simulation to Innovate
Track Coordinator - Using Simulation to Innovate: Ben Feng (University of Waterloo), Loo Hay Lee (National University of Singapore), Simon J. E. Taylor (Brunel University London)
Using Simulation to Innovate
Innovative Applications in Simulation
Chair: Loo Hay Lee (National University of Singapore)
Sustainable Catastrophic Cyber-risk Management in IoT Societies
Ranjan Pal, Ziyuan Huang, Xinlong Yin, and Mingyan Liu (University of Michigan); Sergey Lototsky (University of Southern California); and Jon Crowcroft (University of Cambridge)
Abstract
IoT-driven smart cities are popular service-networked ecosystems whose proper functioning depends heavily on digitally secure and reliable supply chain relationships. However, the naivety of current security efforts by concerned parties to protect IoT devices poses tough challenges to scalable and expanding cyber-risk management markets for IoT societies in the aftermath of a systemic cyber-catastrophe. As firms increasingly turn to cyber-insurance for reliable risk management, and insurers turn to reinsurance for their own risk management, questions arise as to how modern-day cyber risks aggregate and accumulate, and whether reinsurance is a feasible model for reliable catastrophic risk management and transfer in smart cities. In this introductory effort, we analyze (a) whether traditional cyber-risk spreading is a sustainable risk management practice and (b) under what conditions it is, for the quite conservative scenario in which proportions of i.i.d. catastrophic cyber-risks of a significantly heavy-tailed nature are aggregated by a cyber-risk manager.
A Combined Classical Molecular Dynamics Simulations and Ab Initio Calculations Approach to Study A-Si:H/C-Si Interfaces
Francesco Buonocore (ENEA), Pablo Luis Garcia-Muller (CIEMAT), Simone Giusepponi and Massimo Celino (ENEA), and Rafael Mayo-Garcia (CIEMAT)
Abstract
In silicon heterojunction solar cells, intrinsic hydrogenated amorphous silicon (a-Si:H) is used to passivate the crystalline silicon (c-Si) surface to suppress electrical losses at interfaces and to maintain ultralow contact resistivity for the selective transport of one type of carrier only. We use ReaxFF (Reactive Force Field) molecular dynamics to efficiently simulate the thermalisation, quenching, and equilibration processes involving thousands of atoms that form realistic a-Si:H/c-Si interface structures. We generated snapshots of the equilibrated c-Si/a-Si:H interface atom configurations at room temperature. Ab initio characterization was then performed on selected configurations to monitor the electronic properties of the c-Si/a-Si:H interface. The evolution of the intragap states is monitored by analyzing the density of states and the charge density. This work will enable the design of more efficient silicon solar cells based on silicon heterojunction technology.
Analysis of Layout Impacts on Resource Allocation for Voting: A Los Angeles Vote Center
Nicholas Dominic Bernardo (University of Rhode Island), Jennifer Lather (University of Nebraska-Lincoln), and Gretchen A. Macht (University of Rhode Island)
Abstract
The overlap of facility layout planning and resource allocation models is relatively new and untested in election administration. Depending on the jurisdiction, layouts for Election Day in-person voting locations are either planned, suggested with rough drawings, or arranged by election officials at the time of setup. This study aims to build better analytical options for election administrators prior to Election Day. A vote center in Los Angeles County, California, during the 2020 Presidential Primary was investigated with discrete-event simulation to determine differences in performance based on layout and operational changes. The results indicate that separating the processing of provisional voters at check-in can significantly reduce the amount of time that voters spend in the vote center. This finding indicates the potential benefit of additional innovation and research investigating the relationship between facility layout and resource allocation for improved voter routing methods and polling location performance.
Using Simulation to Innovate
Innovative Modeling Techniques and Platforms in Simulation
Chair: Simon J. E. Taylor (Brunel University London)
Hybrid Modelling and Simulation (M&S): Driving Innovation in the Theory and Practice of M&S
Navonil Mustafee and Alison Harper (University of Exeter) and Stephan Onggo (University of Southampton)
Abstract
Hybrid Simulation (HS) is the application of two or more simulation techniques (e.g., ABS, DES, SD) in a single M&S study. Distinct from HS, Hybrid Modelling (HM) is defined as the combined application of simulation approaches (including HS) with methods and techniques from the broader OR/MS literature and also across disciplines. In this paper, we expand on the unified conceptual representation and classification of hybrid M&S, which includes both HS (Model Types A-C), hybrid OR/MS models (D, D.1) and cross-disciplinary hybrid models (Type E), and assess their innovation potential. We argue that model types associated with HM (D, D.1, E), with its focus on OR/MS and cross-disciplinary research, are particularly well-placed in driving innovation in the theory and practice of M&S. Application of these innovative HM methodologies will lead to innovation in the application space as new approaches in stakeholder engagement, conceptual modelling, system representation, V&V, experimentation, etc. are identified.
Capturing Miner and Mining Pool Decisions in a Bitcoin Blockchain Network: A Two-Layer Simulation Model
Kejun Li, Yunan Liu, Hong Wan, and Ling Zhang (North Carolina State University)
Abstract
Motivated by the growing interest in Bitcoin blockchain technology, we build a Monte Carlo simulation model to study miners' and mining pool managers' decisions in the Bitcoin blockchain network. Our simulation model aims to capture the dynamics of these two different parties and how their decisions collectively affect the system dynamics. Given limited monetary budgets and mining power capacity, individual miners decide which mining pools to join and determine how much hashing power to invest. Mining pool managers need to determine how to appropriately allocate the mining reward and how to adjust the membership fee. In addition to the aforementioned miner and pool behavior, we also characterize the system-level dynamics of the blockchain in terms of the mining difficulty level and total hashing power.
Innovations in Simulation: Experiences with Cloud-based Simulation Experimentation
Simon J. E. Taylor, Anastasia Anagnostou, and Nura Tijjani Abubakar (Brunel University London); Tamas Kiss, James DesLauriers, and Gabor Terstyanszky (University of Westminster); Peter Kacsuk and Jozsef Kovacs (Institute for Computer Science and Control); and Shane Kite, Gary Pattison, and James Petry (Saker Solutions Ltd)
Abstract
The amount of simulation experimentation that can be performed in a project can be restricted by time, especially if a model takes a long time to simulate and many replications are required. Cloud Computing presents an attractive proposition to speeding up, or extending, simulation experimentation as computing resources can be hired on demand rather than having to invest in costly infrastructure. However, it is not common practice for simulation users to take advantage of this and, arguably, rather than speeding up simulation experimentation users tend to make compromises by using unnecessary model simplification techniques. This may be due to a lack of awareness of what Cloud Computing can offer. Based on several years’ experience of innovation in this area, this article presents our experiences in developing Cloud Computing applications for simulation experimentation and discusses what future innovations might be created for the widespread benefit of our simulation community.
Using Simulation to Innovate
Digital Twins and Innovative Simulations in Industry
Chair: Wei Xie (Northeastern University)
Simulation-based Evaluation of Handover Mechanisms in High-Speed Railway Control and Communication Systems
Xin Liu and Dong Jin (Illinois Institute of Technology) and Tairan Zhang (CRRC Zhuzhou Institute Co., Ltd)
Abstract
High-speed rail transit systems are becoming one of the major public transportation services connecting many modern cities. The development of automated train control systems plays a crucial role in smart city design and realization. However, the train-to-ground wireless communication network faces challenges due to the high-velocity nature of the railway system, such as the increased probability of handover failures. Research efforts have been made to improve the handover mechanism of LTE-based railway communication protocols, but most solutions are developed and evaluated under the assumption of an ideal linear topology of wireless stations along train lines. In this work, we construct a high-fidelity simulation model based on a real-world measurement dataset. We also implement multiple proposed handover mechanisms and conduct a simulation-based comparative study of them in terms of handover quality and network performance.
Framework of O2DES.NET Digital Twins for Next Generation Ports and Warehouse Solutions
Haobin Li, Xinhu Cao, Pankaj Sharma, Loo Hay Lee, and Ek Peng Chew (National University of Singapore)
Abstract
Innovative solutions are proposed to meet the challenges brought by the development of next-generation ports and warehouses. To test the feasibility of these solutions and increase the “wisdom” of maritime logistics facilities, a maritime Digital Twin (DT), the SingaPort.Net Suite, is developed and presented in this paper. An object-oriented discrete event simulation (O2DES.NET) framework is proposed as the development framework. Seven functions achieved by O2DES.NET are embedded in the SingaPort.Net Suite and are illustrated in this paper. It is hoped that, with the help of the SingaPort.Net Suite and O2DES.NET, the maritime logistics DT ecosystem will experience vivid and healthy growth in the coming decades.
Simulation-based Digital Twin Development for Blockchain Enabled End-to-end Industrial Hemp Supply Chain Risk Management
Wei Xie (Northeastern University)
Abstract
With the passage of the 2018 U.S. Farm Bill, Industrial Hemp production moved from limited pilot programs to a regulated agricultural production system. However, the Industrial Hemp Supply Chain (IHSC) faces critical challenges, including high complexity and variability, very limited production knowledge, and a lack of data/information tracking. In this paper, we propose blockchain-enabled IHSC and develop a preliminary simulation-based digital twin for this distributed cyber-physical system (CPS) to support process learning and risk management. Basically, we develop a two-layer blockchain with a proof-of-authority smart contract, which can track data and key information, improve supply chain transparency, and leverage local authorities and state regulators to ensure quality control verification. Then, we introduce a simulation-based digital twin for IHSC management, which can characterize the process's spatial-temporal causal interdependencies and dynamic evolution to guide risk control and decision making. Our empirical study demonstrates the promising performance of the proposed platform.
Track Coordinator - Vendor Tracks: Miguel Mujica Mota (Amsterdam University of Applied Sciences), Edward Williams (PMC)
Vendor
Introduction to MOSIMTEC and Simio
Chair: Amy Greer (MOSIMTEC, LLC); Martin Franklin (MOSIMTEC, LLC); Ryan Welch Luttrell (Simio LLC); Caleb Whitehead (Simio LLC)
Simulation Software Selection Criteria
Amy Greer and Martin Franklin (MOSIMTEC, LLC)
Abstract
“What is the best simulation package?” is a question MOSIMTEC consultants are often asked. However, there is no single right answer. Many packages excel in certain areas, making them the best for specific use cases. At MOSIMTEC our consultants have a long work history with many of the major simulation packages in use today. Our ability to use the best simulation tool for a particular project is a key advantage: we are not bound to having only one simulation package in our toolbox. This brief presentation will discuss the software characteristics companies should consider when selecting a simulation platform. The last half of the presentation will be dedicated to a panel discussion with MOSIMTEC consultants who have professional experience in a variety of simulation platforms.
Introduction to Simio
Ryan Welch Luttrell and Caleb Whitehead (Simio LLC)
Abstract
This paper describes the Simio modeling system, which is designed to simplify model building by promoting a modeling paradigm shift from process orientation to object orientation. Simio is a simulation modeling framework based on intelligent objects. The intelligent objects are built by modelers and may then be reused in multiple modeling projects. Although the Simio framework is focused on object-based modeling, it also supports seamless use of multiple modeling paradigms including event, process, object, system dynamics, agent-based modeling, and Risk-based Planning and Scheduling (RPS).
Vendor
Applied Materials' Factory Performance & Simio in Industry 4.0
Chair: Stephen Mulvey (Applied Materials); Gerrit Zaayman (Simio LLC); Adam Sneath (Simio LLC)
Factory Performance Productivity Suite by Applied Materials
Stephen Mulvey (Applied Materials)
Abstract
Come learn about how Applied Materials Factory Productivity software utilizes the latest technologies and approaches including simulation, optimization and real-time data integration to solve complex manufacturing challenges. Applied Materials has developed solutions for planning, scheduling and prediction that integrate simulation as a key component of the manufacturing solution. These advances have made Applied Materials Productivity Suite one of the most widely used software solutions for the high-tech industry worldwide. Come see why the top high-tech companies rely on Applied Materials solutions to deliver their results.
The Application of Simio Scheduling in Industry 4.0
Gerrit Zaayman and Adam Sneath (Simio LLC)
Abstract
Simulation has traditionally been applied in system design projects where the basic objective is to evaluate alternatives and predict and improve the long term system performance. In this role, simulation has become a standard business tool with many documented success stories. Beyond these traditional system design applications, simulation can also play a powerful role in scheduling by predicting and improving the short term performance of a system. In the manufacturing context, the major new trend is towards digitally connected factories that introduce a number of unique requirements which traditional simulation tools do not address. Simio has been designed from the ground up with a focus on both traditional applications as well as advanced scheduling, with the basic idea that a single Simio model can serve both purposes.
Vendor
Advanced AnyLogic and Pathmind's Reinforcement Learning
Chair: Edward Junprung (Pathmind.Inc); Andrei V. Borshchev (The AnyLogic Company); Mohammed Farhan (The University of Texas at Arlington); Brett Göhre (Pathmind.Inc)
AnyLogic 9 - Preview of the Next Generation World Class Simulation Software
Andrei Borshchev (The AnyLogic Company)
Abstract
AnyLogic is the technology leader and the most popular tool for building simulation models for business applications. The AnyLogic ecosystem includes a flexible core; a number of vertical solutions for material handling, supply chains, rail and road transportation, pedestrian modeling, etc.; and AnyLogic Cloud, a highly scalable execution environment for AnyLogic models. We will give a quick overview of AnyLogic modeling technology and then demonstrate the alpha version of AnyLogic 9, a new generation of modeling software with a fully functional web interface; cloud, server, and desktop installation; teamwork support; a choice of multiple scripting languages; and other exciting new features.
Reinforcement Learning In AnyLogic Simulation Models: A Guiding Example Using Pathmind
Mohammed Farhan (The University of Texas at Arlington) and Brett Göhre and Edward Junprung (Pathmind.Inc)
Abstract
Reinforcement learning has recently gained a lot of exposure in the simulation industry. In this paper, we demonstrate the use of reinforcement learning in AnyLogic software models using Pathmind. A coffee shop simulation is built to train a barista to make correct operational decisions and improve efficiency, which directly affects customer service time. The trained policy outperforms rule-based functions in terms of customer service time and throughput.
Vendor
FlexSim for Industry 4.0 and Springer Publishing
Chair: Bill Nordgren (FlexSim Software Products, Inc.); Wayne Wheeler (Springer)
FlexSim 2021: Advanced Simulation Modeling for the Industry 4.0 Era
Bill Nordgren (FlexSim Software Products, Inc.)
Abstract
Join us for a demonstration of the latest simulation modeling and analysis capabilities in FlexSim 2021. Significant recent developments have enabled our users to execute even the most complex digital twin projects, and we're excited to show several examples of how FlexSim is being used as a pillar of Industry 4.0 initiatives. Combined with our recent advancements in agent-based modeling and powerful tools for warehousing and manufacturing, you'll discover why FlexSim is the complete package for any simulation modeling application.
How Publishing with Springer Advances Your Research!
Wayne Wheeler (Springer)
Abstract
As part of Springer Nature, Springer advances discovery by providing optimal service to the simulation and modelling community, including high-quality academic and professional book titles in our unique-to-the-industry series, Simulation Foundations, Methods and Applications: https://www.springer.com/series/10128 . The research we publish there and in numerous supporting publications is significant and robust, and it reaches all relevant audiences in the best possible format so it can be readily discovered, accessed, used, re-used, and shared. We publish many series and non-series book titles and numerous relevant journals, including a number of fully open-access journals. Our book and eBook portfolio comprises conference proceedings, book series, textbooks, and major reference works from distinguished authors. WSC attendees receive 20% off book and eBook titles for a limited time.
Vendor
Introduction to AnyLogic and Arena
Chair: Alexander Rakulenko (The AnyLogic Company); Nancy Zupick (Rockwell Automation Inc.); Gregory Monakhov (The AnyLogic Company); Melanie Barker (Rockwell Automation Inc.)
AnyLogic Cloud: An Integrated Environment for the Entire Model Lifecycle
Alexander Rakulenko and Gregory Monakhov (The AnyLogic Company)
Abstract
AnyLogic Cloud is a family of products that changes the way models live. To run a model, you can simply open its page in the Cloud using a web browser. To share a model with colleagues and clients, you can send them a link or embed the model in your website. Model version management and results storage ensure you do not waste time searching for the right experiment and model version, or running the same experiment twice for no reason. Built-in configurable dashboards, downloadable results, and a RESTful API for direct access to the results database provide smooth ways of working with the data for different purposes: from demonstrating results in a meeting, to exporting them to business analytics or enterprise software, to reinforcement and machine learning, AI training, and integration with custom solvers, algorithms, optimization engines, etc.
pdf
Arena Simulation: The Next Generation
Nancy Zupick and Melanie Barker (Rockwell Automation Inc.)
Abstract
Arena Simulation has just unveiled its latest and greatest update: a brand-new interface. We’ve maintained the same look and feel you’ve come to know over the years, while incorporating many tools and layouts that enhance the usability of our software. Also included are new features that increase the speed of your simulations, such as multi-core technology. We invite you to take a tour with our software experts as we do a full review of Arena’s new interface and the other tools that continue to make us the fastest option available.
pdf
Track Coordinator - Simulation Education: Saikou Diallo (Virginia Modeling, Analysis and Simulation Center, VMASC)
Simulation Education
Learning Environments
Chair: Andrew J. Collins (Old Dominion University)
Learning Environment for Introduction in Discrete-Event Simulation for Design and Improvement of New and Existing Material Flow Systems
Bastian Clemens Schumacher (Technische Universität Berlin) and Holger Kohl (Technische Universität Berlin, Fraunhofer Institute for Production Systems and Design Technology IPK)
Abstract
This paper shows how students can be familiarized with executing simulation studies for the design and improvement of new and existing material flow systems using flexible discrete-event simulation (DES) tools. The prototypical app “Production Simulation Application” is described. It combines learning-conducive components that familiarize users with objects, graphical model buildup, and the use of a programming language. Game elements such as levels, badges, and points are designed to motivate learners to interact frequently and enable immediate feedback. A test shows that the app has been used repeatedly at short intervals beyond the course. A procedure for experience-based learning in conducting simulation studies is developed, in which a so-called learning factory enables learners to complete a simulation study. It is shown that these developments can contribute to the dissemination of DES and to increasing planning quality in times of rising complexity of production systems.
pdf
Education in Analytics Needed for the Modeling & Simulation Process
James F. Leathrum, Andrew J. Collins, T. Steven Cotter, Christopher J. Lynch, and Ross Gore (Old Dominion University)
Abstract
The increase in the availability of data has led to organizations asking how to use that data to understand their processes and to plan for their future. This paper discusses integrating analytics with modeling and simulation in a sequence of courses intended to provide organizations the ability to utilize their data to make better-informed decisions. Classical modeling and simulation education has utilized simple statistical techniques to address input data modeling and output analysis. Our premise is that a solid background in analytics provides an analyst not only with the ability to understand the data better but also the ability to improve the quality of a simulation’s data inputs and the ability to analyze the increasing amount of data generated by simulations. The paper proposes a set of elements of analytics and presents how they can impact a basic set of modeling and simulation project activities.
pdf
Teaching Simulation to Generation Z Engineering Students: Lessons Learned from a Flipped Classroom Pilot Study
Michelle M. Alvarado, Katie Basinger, Behshad Lahijanian, and Diego Alvarado (University of Florida)
Abstract
Simulation has been a long-time staple of industrial and systems engineering programs. Today's engineering students, known as Generation Z (Gen-Z), exhibit specific generational traits, including short attention spans and multi-tasking across multiple digital screens. These Gen-Z preferences have created new challenges and opportunities for delivering simulation methods and training in the classroom. In this paper, we present the motivating factors and our success in converting a software-intensive undergraduate-level simulation course to a flipped classroom setting. Furthermore, we present lessons learned from a multi-semester pilot study that investigated the impact of video length: lecture-length (40-50 minute) vs. short (10-12 minute) videos. Surprisingly, Gen-Z students preferred the lecture-length videos. This paper shares our experience in transitioning from traditional to flipped classroom lectures for teaching simulation to Gen-Z engineering students. We share student feedback about the transition process, key tips for communicating expectations and deliverables, and how to minimize video burnout.
pdf
Simulation Education
Teaching Simulation and Analytics Using R
Chair: Andrew J. Collins (Old Dominion University)
Animation for Simulation Education in R
Vadim Kudlay and Barry Lawson (University of Richmond) and Lawrence M. Leemis (William & Mary)
Abstract
R is freely available software for statistical computing, providing a variety of statistical analysis functionality. In prior work, we introduced and released the simEd package for R, focusing on functions for generating discrete and continuous variates via inversion and for extensible single- and multiple-server queueing simulation, and including real-world data sets for input modeling and analysis. In the current work, we significantly enhance and extend the simEd package, primarily through a variety of animation and visualization utilities intended to aid in simulation education. These include animations of event-driven simulation details for a single-server queueing model, of random-variate generation for a variety of distributions, of a Lehmer random number generator, of variate generation via acceptance-rejection, and of generating a non-homogeneous arrival process via thinning.
pdf
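The thinning technique mentioned in the abstract above (generating a non-homogeneous arrival process) can be sketched in a few lines. This is an illustrative Python sketch, not the simEd R implementation; the function name and the sinusoidal rate are hypothetical choices for the example.

```python
import math
import random

def thinning(rate_fn, rate_max, t_end, seed=None):
    """Generate arrival times of a non-homogeneous Poisson process on
    [0, t_end) by thinning: propose candidate arrivals at the constant
    majorizing rate rate_max, then accept each candidate at time t with
    probability rate_fn(t) / rate_max."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_max)        # candidate inter-arrival gap
        if t >= t_end:
            return arrivals
        if rng.random() < rate_fn(t) / rate_max:
            arrivals.append(t)                # candidate accepted

# Example: sinusoidal rate oscillating between 0 and 10 arrivals per unit time
arrivals = thinning(lambda t: 5 + 5 * math.sin(t), rate_max=10.0, t_end=100.0, seed=1)
```

The majorizing rate `rate_max` must dominate `rate_fn` everywhere on the horizon, otherwise accepted arrivals underrepresent the peaks of the rate function.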
Teaching Risk Analytics Using R
Ravi Doddavaram and Canan Gunes Corlu (Boston University)
Abstract
We discuss our experience using R, free software that is particularly suitable for computer simulation, in a risk analytics course offered to students with different experience levels and technical sophistication. We highlight relevant R packages that we found useful for input modeling and stochastic optimization, along with several examples. We also introduce a new package that allows the simulation of decision trees. Student feedback regarding the use of R in comparison to a spreadsheet-based platform has been very positive.
pdf
Track Coordinator - Ph.D. Colloquium: Weiwei Chen (Rutgers University), Anatoli Djanatliev (University of Erlangen-Nuremberg), Jose J. Padilla (VMASC, Old Dominion University), Chang-Han Rhee (Northwestern University)
PhD Colloquium
PhD Colloquium
Chair: Jose J. Padilla (Old Dominion University, VMASC)
We Are All in It Together: Training the Next Generation of Model Thinkers
Saikou Diallo (Old Dominion University)
Abstract
We need to train the next generation of scientists, modelers, and analysts to be transdisciplinary collaborators who are technically competent and socially aware. Model thinking and modeling and simulation can play an important role if the field broadens its horizons beyond socio-technical problems and tackles issues that are important to the human condition. In this talk, we discuss transdisciplinarity, inclusion, and social awareness as three pillars that we can build on to stay relevant in a world where technological change is outpacing our ability to manage and predict its impact on key aspects of life and society. We present tools, models, and practical examples to illustrate the importance of each pillar and provide a basis for what we hope is a strong debate within the scientific community.
pdf
Dynamically Adjusting Sequencing Rules in a Complex Manufacturing System with Uncertainty
Thomas Voß (Leuphana Universität)
Abstract
Especially in complex manufacturing systems under uncertain conditions, sequencing the operations in a machine's queue can pose a difficult problem. Decentralized approaches have proven useful, and the use of sequencing rules has often been a viable option. Still, no single rule outperforms all other rules as system conditions vary. For that reason, reinforcement learning (RL) is used as a hyper-heuristic to dynamically select and adjust sequencing rules based on system status. The trained agent is tested in a complex manufacturing system with uncertainty and, evaluated under various conditions, matches or outperforms the best sequencing rules found in a preliminary simulation study. For unknown scenarios, it remains to be evaluated whether the RL hyper-heuristic can also switch to a sequencing rule suitable for the scenario, providing robust performance.
pdf
A Flexible Parallelized Reversible Jump Markov Chain Monte Carlo Method
John T. Chavis III (Cornell University)
Abstract
Reversible jump Markov chain Monte Carlo (RJMCMC) is a powerful Bayesian trans-dimensional algorithm for performing model selection while inferring the distribution of model parameters. Despite the general applicability of this trans-dimensional sampler, questions remain about the degree to which the resulting Markov chains have converged and can provide accurate samples from the desired stationary distribution. The present work introduces a parallel RJMCMC implementation that aims to increase the accessibility of RJMCMC to practitioners and to help assess the accuracy and convergence of Markov chains generated from applying RJMCMC to real data.
pdf
Generation of Data for Artificial Intelligence Applications in the Building Sector
Kristin Majetta (Fraunhofer IIS EAS)
Abstract
This PhD Colloquium contribution shows a way to generate data for training artificial neural networks for the building sector. The basis is a simulation study of different room and room-controller models. Based on those models, parameter variations, including optimization of the controller parameters, are performed with the tool GridWorker.
pdf
A Framework for the Simulation of Tomorrow’s Mobility
Moritz Gütlein (FAU Erlangen-Nuremberg)
Abstract
The mobility sector is broad and linked to various other domains such as communication, energy, or society in general. Simulation can help to invent, test, and evaluate solutions for cross-domain mobility problems. However, the modeling and simulation of elaborate scenarios brings many challenges of its own. Therefore, a framework is proposed that can be utilized by different kinds of users to design and run mobility simulations.
pdf
Sample-Path Algorithm for Global Optimal Solution of Resource Allocation in Queueing Systems with Performance Constraints
Mengyi Zhang (Politecnico di Milano)
Abstract
Resource allocation problems with performance constraints (RAP-PC) are a category of optimization problems in queueing system design. They are often found in operations management of manufacturing and service systems. RAP-PC aims at finding the system with the minimum cost while guaranteeing a target performance, which usually must be obtained by simulation due to the complexity of practical systems. This work proposes an algorithm providing a sample-path exact solution within finite time. Specifically, the algorithm works on the mathematical programming model of RAP-PC and uses logic-based exact and gradient-based approximate feasibility cuts to define and reduce the feasible region. Results show that the proposed approach can solve problems on lines with up to 9 stages to optimality within two hours and can find good-quality feasible solutions faster than the state-of-the-art algorithm.
pdf
Dynamic Sampling for Risk Minimization in Semiconductor Manufacturing
Etienne Le Quere (Mines Saint-Étienne, Univ Clermont Auvergne)
Abstract
To control the quality of their processes, manufacturers perform measurement operations on their products. In semiconductor manufacturing, measurement capacity is limited because metrology tools are expensive, thus only a limited number of lots of products can be measured. Selecting the set of lots to control to minimize risk is called sampling. This work studies the problem of optimizing the sampling of lots to minimize the number of wafers at risk on production machines in semiconductor manufacturing.
pdf
Dynamic Data-driven Simulation-based Decision Support System for Medical Procedures
Saurabh Jain (The University of Arizona)
Abstract
Medical procedures require high precision and accuracy under time and situational uncertainties. Hence, caregivers must possess high cognitive and technical skills to perform complex operations in a timely and appropriate manner. This creates a need for a system that can provide effective training, personalized objective assessment, and appropriate decision support so that trainees can make rapid progress in overcoming steep learning curves. This work proposes a dynamic data-driven simulation-based decision support system for medical procedures that can address the aforementioned challenges. We aim to address three key research objectives: (a) design and develop a high-fidelity physics-based simulation environment, (b) provide online proficiency assessment to generate real-time personalized feedback for trainees, and (c) formalize a taxonomy for representing the behaviors of caregivers under time and situational uncertainties. The proposed work has been implemented and validated using numerous experiments comprising expert and novice users.
pdf
Simulation Optimization by Reusing Past Replications: Don't Be Afraid of Dependence
Tianyi Liu (Georgia Institute of Technology)
Abstract
The main challenge of simulation optimization is the limited simulation budget imposed by the high computational cost of simulation experiments. One approach to overcoming this challenge is to reuse simulation outputs from previous iterations in the current iteration of the optimization procedure. However, because of the dependence among iterations, simulation replications from different iterations are not independent, so the good empirical performance of this approach has lacked theoretical justification. In this paper, we fill this gap by theoretically studying the stochastic gradient descent method with reuse of past simulation replications. We show that reusing past replications does not change the convergence of the algorithm, which implies that the bias of the gradient estimator is asymptotically negligible. Moreover, we show that reusing past replications reduces the conditional variance of the gradient estimators, which implies that the algorithm can use larger step-size sequences to achieve faster convergence.
pdf
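The idea of reusing past simulation replications inside a stochastic-gradient loop, as described in the abstract above, can be illustrated on a toy problem. This sketch is not the paper's algorithm or analysis; the quadratic objective, buffer scheme, and step-size rule are assumptions made for the example.

```python
import random

def sgd_with_reuse(sample_w, x0, iters, step0, buffer_size=32, seed=0):
    """Minimize f(x) = E[(x - W)^2] by stochastic gradient descent.
    Each iteration draws one fresh simulation replication W, but the
    gradient at the current x is estimated by averaging over a buffer
    that also contains replications reused from earlier iterations."""
    rng = random.Random(seed)
    buffer, x = [], x0
    for k in range(1, iters + 1):
        buffer.append(sample_w(rng))          # one fresh replication per step
        if len(buffer) > buffer_size:
            buffer.pop(0)                     # keep only the most recent draws
        # gradient of (x - w)^2 is 2(x - w); average over reused replications
        grad = sum(2.0 * (x - w) for w in buffer) / len(buffer)
        x -= (step0 / k) * grad               # diminishing step size
    return x

# W ~ Uniform(0, 10), so the minimizer of E[(x - W)^2] is E[W] = 5
x_star = sgd_with_reuse(lambda rng: rng.uniform(0.0, 10.0), x0=0.0, iters=5000, step0=1.0)
```

Averaging over the buffer lowers the variance of each gradient estimate at the cost of dependence between iterations, which is exactly the trade-off the paper analyzes.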
Investigating Cloud-based Distributed Simulation (CBDS) for Large-Scale Systems
Nura Tijjani Abubakar (Brunel University London; Jigawa State Institute of IT, Kazaure)
Abstract
Distributed Simulation (DS) allows new or existing models to be composed into a larger model, which can run across geographically distributed locations. The High-Level Architecture (HLA) is one of the established standards used to run a DS. Cloud computing offers network resources that are beneficial to DS. However, combining these concepts to speed up simulation experiments can be challenging. This paper proposes an approach to compose and execute large-scale Cloud-Based Distributed Simulation (CBDS). An Emergency Medical Service (EMS) model was used as a proof of concept, and initial performance test results are presented as work in progress.
pdf
Sensitivity Analysis of Arc Criticalities in Stochastic Activity Networks
Peng Wan (The University of Maryland at College Park)
Abstract
Using Monte Carlo simulation, this paper proposes a new algorithm for estimating the arc criticalities of stochastic activity networks. The algorithm is based on the following result: given the lengths of all arcs in a network except the one arc of interest, that arc is on the critical path (longest path) if and only if its length is greater than a threshold. The new algorithm is therefore named Threshold Arc Criticality (TAC). By applying Infinitesimal Perturbation Analysis (IPA) to TAC, an unbiased estimator of the stochastic derivative of the arc criticalities with respect to parameters of the arc length distributions can be derived. With a valid estimator of this stochastic derivative, sensitivity analysis of arc criticalities is carried out via simulation of a small test network.
pdf
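The threshold result stated in the abstract above lends itself to a conditional Monte Carlo estimator. The following Python sketch applies it to a hypothetical three-arc network invented for illustration; it is not the paper's TAC implementation or its test network.

```python
import math
import random

def tac_criticality(n_reps=200_000, seed=42):
    """Threshold-style criticality estimate for a toy network with arcs
    s->m (A), m->t (B), and the arc of interest s->t (C).  Given A and B,
    arc C lies on the longest s->t path iff its length exceeds the
    threshold A + B, so with C ~ Exp(1) the conditional criticality is
    exp(-(A + B)); averaging these conditional probabilities gives the
    estimate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_reps):
        a, b = rng.random(), rng.random()     # A, B ~ Uniform(0, 1)
        total += math.exp(-(a + b))           # P(C > A + B) for C ~ Exp(1)
    return total / n_reps

estimate = tac_criticality()
```

For this toy case the answer is available in closed form, E[exp(-(A+B))] = (1 - e^-1)^2, which makes it easy to check the estimator; conditioning on the threshold also smooths the indicator, which is what makes the IPA derivative estimator in the abstract possible.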
“New Entries with Cooperation” Game in the Mobile Telecommunication Market
Norihiro Hayakawa (University of Tsukuba)
Abstract
The structural barriers to entry into the mobile telecommunication market are discussed. Governments, especially the Japanese government, have frequently tried to facilitate new entries into this oligopoly market by introducing new regulations. However, in many cases the new MNOs (Mobile Network Operators) tied up with existing MNOs, which implies that new MNOs rarely succeed in entering the market independently.
We develop a Cournot-type competition game in which cooperative action is strategically executed by every player in the market. By employing real market data, we analyze the MNOs' cost structures. We evaluate the structural barriers to entry by the likelihood of cooperation between MNOs. We found that a market in which share is distributed among small MNOs is a market with high entry barriers. This shows that one of the structural barriers to entry to the market is the distribution of share achieved by small MNOs.
pdf
Track Coordinator - Poster Session: María Julia Blas (INGAR CONICET UTN), Cristina Ruiz-Martín (Carleton University)
Poster Session
Poster Session
Chair: María Julia Blas (INGAR CONICET UTN); Cristina Ruiz-Martín (Carleton University)
Optimal Design of Building Envelopes for an Office Building Using Bayesian Optimization
Young-Sub Kim and Cheol-Soo Park (Seoul National University)
Abstract
The design of building envelopes is one of the key processes for reducing Energy Use Intensity (EUI, kWh/㎡·yr). Two approaches exist for this process: prescriptive and performance-based. The former is easy to use but does not allow design flexibility. The latter can be more effective than the former, but it is difficult to find the optimal combination of design variables in an infinite option space. In addition, the design parameters are interrelated. This study uses Bayesian Optimization (BO) to find optimal design parameters minimizing EUI for a given office building. It was found that BO can be beneficially used to find an optimum.
pdf
Predictive Uncertainty of Residential Building Energy Model
Youngsik Choi and Cheol-Soo Park (Seoul National University)
Abstract
For optimal design and control of residential buildings, high-performance simulation models are required. In general, a model's performance is evaluated in terms of accuracy (MBE, CVRMSE). However, highly accurate models may not always be reliable due to their inherent 'predictive' uncertainty. It is found that even accurate simulation models that meet the ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers, 2014) guidelines can produce significant predictive uncertainty. In this study, an energy model of a residential building was analyzed in terms of predictive uncertainty.
pdf
Predictability of Building Energy Simulation for Existing Buildings
Han Sol Shin (Seoul National University), Deuk Woo Kim (Korea Institute of Civil Engineering and Building Technology), and Cheol Soo Park (Seoul National University)
Abstract
Uncertainty analysis of building energy simulation models has been actively studied over the past two decades. Uncertainty analysis is performed in the belief that reflecting the influence of uncertain variables in the simulation process can reduce the performance gap between simulation predictions and reality. If the monthly energy use over at least three years is similar under similar weather conditions, the energy use can be regarded as predictable and uncertainty analysis helps. However, it is important to investigate whether the opposite case might exist: that the monthly energy use of a building is unpredictable (not repeating) under similar weather over several years. In this paper, the predictability of 3,157 buildings' energy use was analyzed using K-Spectral Centroid distance and the Maximal Information Coefficient. As a result, a significant portion of the buildings lie outside the range of uncertainty analysis, meaning they are 'unpredictable' by a simulation model.
pdf
Using Discrete-Event Simulation for Potential Analysis of Predictive Maintenance in Semiconductor Manufacturing
Patrick Moder (Infineon Technologies AG); Daniel Fischer (Infineon Technologies AG, Munich University of Applied Sciences); and Hans Ehm (Infineon Technologies AG)
Abstract
Predictive Maintenance (PdM) offers one possibility for improving productivity in semiconductor manufacturing. Current research on PdM mainly focuses on its technical implementation. By applying discrete-event simulation, we provide results on how maintenance strategies influence operational performance and how PdM contributes to an overall improvement of productivity in wafer fabrication.
pdf
Occupant-Adaptive Indoor Environmental Controller Using DQN
Seongkwon Cho and Cheol-Soo Park (Seoul National University)
Abstract
Heating, ventilation, and air conditioning (HVAC) controllers should be energy-efficient and responsive to occupants' personal preferences. Recently, deep reinforcement learning (DRL) has received increasing attention due to its capability of learning and adapting to building dynamics and occupant behavior. In this study, the authors use a deep Q-network (DQN) to control two air conditioners of a residential building. The DQN agent is trained to learn the thermal preferences of virtual occupants and to take appropriate control actions that achieve energy savings and occupant thermal satisfaction. The simulation results show that the DQN controller adapts itself to personal thermal preferences while successfully reducing the energy consumption of the building systems.
pdf
Simulation Metamodeling to Support Hospital Capacity Planning
John Maleyeff, Canan Gunes Corlu, and Xinzhuo Wang (Boston University)
Abstract
A metamodeling approach is used to reduce the run time of a simulation-based decision support system that helps healthcare administrators with capacity planning, because the lengthy simulation run time precludes practical use. A least-squares regression equation is used to find the optimal server utilization, which is located at the "knee" of the server utilization-wait time curve. The regression model fits the simulation output data well, and confirmation runs that compare simulation results with regression model predictions confirm its accuracy.
pdf
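The metamodeling idea in the abstract above, fitting a regression to simulation output and reading a "knee" off the utilization-wait curve, can be sketched on synthetic data. This is an illustrative Python sketch under stated assumptions: the M/M/1 wait formula stands in for simulation output, and the slope-threshold definition of the knee is a heuristic chosen for the example, not the paper's definition.

```python
import random

def mm1_wait(rho, mu=1.0):
    """Expected M/M/1 queueing delay Wq = rho / (mu * (1 - rho))."""
    return rho / (mu * (1.0 - rho))

# "Simulated" output: analytical wait times with multiplicative noise,
# standing in for replications of a queueing simulation.
rng = random.Random(7)
rhos = [0.05 * i for i in range(1, 19)]               # utilizations 0.05 .. 0.90
waits = [mm1_wait(r) * rng.uniform(0.95, 1.05) for r in rhos]

# Least-squares fit of the metamodel W(rho) ~ b * rho / (1 - rho):
# minimizing sum (w_i - b * x_i)^2 with x_i = rho_i / (1 - rho_i)
# gives b = sum(w * x) / sum(x^2).
xs = [r / (1.0 - r) for r in rhos]
b = sum(w * x for w, x in zip(waits, xs)) / sum(x * x for x in xs)

def knee(b, slope_limit=5.0):
    """Heuristic knee: the smallest utilization (on a 0.01 grid) where the
    fitted curve's slope, b / (1 - rho)^2, exceeds slope_limit."""
    rho = 0.01
    while b / (1.0 - rho) ** 2 < slope_limit:
        rho += 0.01
    return rho

optimal_rho = knee(b)
```

Once fitted, the regression answers capacity-planning questions instantly, which is the point of replacing repeated long simulation runs with a metamodel.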
Simulation Projects with Computer Science Undergraduate Students
Tom Warnke and Adelinde M. Uhrmacher (University of Rostock)
Abstract
We report on our recent experiences with a project-based simulation course in the third year of the Computer Science curriculum at the University of Rostock. Students developed and implemented simulation models of queuing systems in the campus canteen or traffic flow at an intersection close to the campus. We pre-structured the projects into milestones and acquired real-world data for model input or validation. Both of these aspects were evaluated positively by the participants. We also report on experiences with managing and grading students in a collaborative group project.
pdf
A Discrete Event Simulation to Facilitate Hybrid Production Planning in a Paper Production Plant
Adhurim Imeri and Christian Fikar (Vienna University of Economics and Business (WU))
Abstract
This study investigates a paper production facility that markets its products in Europe and whose production strategy is purely make-to-order (MTO). The effects of deploying a make-to-stock (MTS) production strategy for specific end products are analysed on a scenario-building basis via a discrete-event simulation model. Initially, the current situation, in which the production strategy is purely MTO, was simulated and validated. Afterwards, end products were evaluated to determine whether they could be produced as MTS. Products qualified to be produced as MTS had their simulated production schedule developed in a spreadsheet-based model. Forecasting for the new production schedule was based on four time-series forecasting methods. Results of various simulation runs highlight that, due to postponement, the MTS production strategy for the qualified final products leads to a leaner warehouse flow.
pdf
Facilitating Resilience of Domestic Pork Supply Chains Through Hybrid Simulation Modeling
Christian Fikar (WU Vienna University of Economics and Business) and Yvonne Kummer, Klaus-Dieter Rest, and Patrick Hirsch (University of Natural Resources and Life Sciences, Vienna)
Abstract
This work develops a simulation-based decision support system for governmental organizations to facilitate resilient pork supply chains. The focus is set on a sudden outbreak of African swine fever (ASF) and its impact on food security as well as the identification of resulting bottlenecks in related logistics and disposal operations. The problem is modeled as a hybrid simulation considering various farm types and production and logistics facilities of the supply chain. Discrete-event elements represent production steps, while agent-based ones consider farmers’ decision making processes over time. Based on real-world data from Austria, various crisis scenarios are simulated tracking animal populations and contacts among individual animals over time. Results highlight the importance of better knowledge of relevant bottlenecks to react fast to such a crisis as well as major benefits of improved visibility within the supply chain.
pdf
A Contingency Planning Toolbox in the Wood Supply Chain
Christoph Kogler and Peter Rauch (University of Natural Resources and Life Sciences, Vienna)
Abstract
The climate crisis challenges wood supply chain management through more frequent and extreme natural calamities. However, insights, literature, and tools on contingency planning for multimodal wood supply chains are missing. Consequently, bottleneck and queuing-time analyses were performed with a discrete-event simulation model of the wood supply chain to investigate key performance indicators such as truck-to-wagon ratios, truck and wagon utilization, worktime coordination, truck queuing times, terminal transshipment volume, and required stockyard. This enabled the development of a contingency planning toolbox consisting of transport strategies, frameworks, and templates to analyze the outcomes of planning decisions before real, inefficient, unsustainable, and long-lasting changes are made. A special focus was set on close-to-reality delivery time, transport tonnage, and train pick-up scenarios for highly relevant business cases to provide rapid, straightforward, and helpful decision support for short-term contingency planning.
pdf
Modeling and Simulation for Time-accurate and Stochastic Analysis of Algorithms
Abdurrahman Alshareef (King Saud University)
Abstract
We propose a framework for performing algorithm analysis based on modeling and simulation. The framework supports time-accurate analysis of the target algorithm for a given input or set of inputs. It can also support stochastic analysis by an extension that allows the model to assign time advances according to some probability distribution. The framework facilitates analyzing computational models within systems in a modularized manner. The granularity of the analyzed instructions as well as the time structure can be determined during the modeling process using the Activity abstraction and DEVS-based support for the simulation. The initial portion of the simulation models is generated automatically and then modified afterward to account for more concrete and operational instructions.
pdf
Hybrid Simulation Model for Virus Transmission on the Diamond Princess Cruise Ship
Jiaqi Lei (University of Michigan)
Abstract
We propose a hybrid simulation model structure for accurate prediction of the COVID-19 transmission on the Diamond Princess cruise ship where a fixed population mixes and has transmissible contacts among different types of small-scale facilities such as restaurants or dorms. We use the agent-based model (ABM) and discrete-event simulation (DES) to predict the infected population in enclosed facilities and the overall transmission dynamics, respectively. The actual infection data fall into the 95% confidence interval of this hybrid model output with the basic reproduction number R0=2.38.
pdf
Quantifying Uncertainty in Sensitivity Analysis of Building Energy Simulation Model
Young Seo Yoo, Dong Hyuk Yi, and Cheol Soo Park (Seoul National University)
Abstract
Sensitivity analysis is important in rational decision making because it can identify meaningful design variables for the energy-efficient design of new buildings or the energy retrofit of existing buildings. However, it is often overlooked that sensitivity analysis itself is also influenced by uncertain parameters such as occupant behavior, infiltration, varying setpoint temperatures for cooling and heating, etc. With this in mind, this study investigates the degree of uncertainty in sensitivity analysis for a given office building. For energy analysis, EnergyPlus, a dynamic building energy simulation tool developed by the US DOE, was employed. For the uncertainty and sensitivity analyses, Latin Hypercube Sampling and the Sobol method were used. It is found that the uncertainty in sensitivity analysis is significant, and careful attention must be paid to the selection of uncertain environmental factors and the corresponding engineering assumptions.
pdf
A Hydrologic Process Model for Watershed Sustainability: A System Dynamics Approach
Raymond L. Smith III and James Randall Etheridge (East Carolina University)
Abstract
Lake Mattamuskeet is a coastal watershed in eastern North Carolina that contributes to the economy, society, agriculture, environment, and natural wildlife of the region. Declining water quality, chronic inundation events, significant water-level fluctuation, poor flushing, and sea-level rise threaten this resource. Community stakeholders have suggested numerous proposals to restore the watershed; however, due to the complexities of this human-modified natural system, estimating the benefits of these very expensive proposals can be challenging. To support the restoration effort, a hydrologic process model of the watershed based on a water budget was constructed using a system dynamics approach. This presentation examines the stakeholder-favored solution of dredging the outflow canals to increase flow capacity. The impact of sea-level rise is also considered. The hydrologic process model provides a valuable resource for discussing solutions and tradeoffs with a diverse stakeholder group.
Traffic Signal Control Simulation and Optimization
Yunsoo Ha and Sara Shashaani (North Carolina State University)
Abstract
The goal is to study the urban traffic signal control problem as a discrete-event simulation. We explore a network-based design and verify the sample-path behavior of the average cycle time as the objective function. We compare the performance of different simulation optimization solvers available in the SimOpt library and discuss takeaways for traffic control structures.
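The abstract gives no model details. A minimal single-intersection sketch (our simplification, not the SimOpt model) treats the signal as a FIFO queue that discharges one vehicle per saturation headway during the green window of each cycle; a departure begun at the end of green is allowed to complete:

```python
def next_green(t, green, cycle):
    """First `green` seconds of each fixed cycle are green; advance t if in red."""
    phase = t % cycle
    return t if phase < green else t - phase + cycle

def mean_delay(arrivals, green, cycle, headway):
    """Mean delay for a FIFO queue serving one vehicle per `headway` s in green."""
    depart_prev, delays = 0.0, []
    for a in arrivals:
        start = next_green(max(a, depart_prev), green, cycle)
        depart_prev = start + headway
        delays.append(depart_prev - a)
    return sum(delays) / len(delays)
```

Such a delay function would be the kind of noisy sample-path objective a simulation optimization solver evaluates when tuning `green` and `cycle`.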
Reducing Handoffs and Improving Patient Flow in the ED
Vishnunarayan Girishan Prabhu and Kevin Taaffe (Clemson University) and Ronald Pirrallo, William Jackson, and Michael Ramsay (Prisma Health-Upstate)
Abstract
Over 145 million people visit US emergency departments annually. The diverse nature and overwhelming volume of patient visits make the ED one of the most complicated healthcare settings. In particular, handoffs, the transfer of patient care from one physician to another during a shift transition, are a common source of errors resulting from workflow interruptions and high cognitive workload. This research focuses on developing a hybrid agent-based discrete-event simulation model to identify physician shifts that minimize handoffs without affecting other performance metrics. By providing overlapping shift schedules and implementing policies that restrict physicians from signing up a new patient during the last hour of a shift, we observed that handoffs and patient time in the emergency department could be reduced by as much as 46% and 31%, respectively.
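As a back-of-the-envelope sketch of the sign-up-restriction policy (illustrative only; the paper's model is a full hybrid simulation), one can count how many patients would span a shift boundary under a given cutoff:

```python
def count_handoffs(patients, shift_end, cutoff):
    """patients: (arrival_hour, treatment_hours) pairs handled by one physician.

    The physician stops signing up new patients `cutoff` hours before shift_end;
    deferred patients go straight to the incoming physician, so no handoff occurs.
    """
    handoffs = 0
    for arrival, duration in patients:
        if arrival >= shift_end - cutoff:
            continue  # deferred to the incoming physician
        if arrival + duration > shift_end:
            handoffs += 1  # care spans the shift change
    return handoffs
```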
Towards a Generic Architecture for Symbiotic Simulation System-based Digital Twin
Chukwudi Nwogu (Modelling & Simulation Group, Brunel University London)
Abstract
The digital twin (DT), one of the most prominent technologies of the Industry 4.0 era, has the ability to integrate the physical and virtual worlds so that the data exchanged between them can be used to support decision-making and improve operations. DT could benefit from advances in symbiotic simulation systems (SSS), which have been used for many years, since before the advent of Industry 4.0, to interact with physical systems and support decision-making using the data from that interaction. Therefore, this paper proposes an SSS-based generic architecture for DT that satisfies DT requirements.
An Investigation of Hybrid Simulation for Behavioral Analysis in Healthcare
Athary Alwasel and Masoud Fakhimi (University of Surrey), Lampros Stergioulas (The Hague University of Applied Sciences), and Wolfgang Garn (University of Surrey)
Abstract
Modeling and Simulation (M&S) studies are used in healthcare to gain insights into different systems of interest and to assist decision-makers. There is, however, a lack of studies that focus on developing frameworks and models that incorporate human factors. Given the nature of behavioral analysis in healthcare, and in order to include underlying factors that may influence behavior patterns, this study aims to develop a conceptual M&S framework for behavioral analytics by applying hybrid approaches. M&S for behavioral analysis seeks to understand and explore the behavior of individuals and how they react to certain systemic interventions. This research proposes a hybrid M&S approach that relies on both qualitative and quantitative elements for modeling operational and human behavioral aspects of healthcare systems. This work contributes to the literature on M&S for behavioral analysis, extending the applicability of soft OR methods in hybrid simulation studies.
Comparison Between Conventional Design and Integrated Simulation-Based Optimal Design for an Office Building
Chul-Hong Park and Cheol-Soo Park (Seoul National University)
Abstract
It is widely acknowledged that an optimal building design can be achieved by selecting appropriate design options at each design stage (conceptual design, preliminary design, final design), e.g. orientation and shape → building envelope → HVAC and controls. This sequential optimal design approach is conventional and is strongly recommended in architectural design practice. In contrast, an integrated optimal design approach exists in which all design parameters are optimized simultaneously. In this paper, the two design approaches are compared for a given office building in terms of the selected optimal design variables and the thermal load (kWh/m2·yr). EnergyPlus, a dynamic building energy simulation tool developed by the U.S. DOE, was used. It is found that the integrated optimal design approach can allow more design options and a more energy-efficient outcome than the conventional approach. In further study, the impact of decision-making at each design stage on the gap between the two design approaches will be investigated.
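The gap between the two approaches comes from interactions among design variables. A deliberately contrived toy table (ours, not the paper's) shows how stage-by-stage selection can miss the joint optimum:

```python
import itertools

# Hypothetical thermal loads: the orientation/envelope interaction table I makes
# the pair (1, 1) jointly best even though each choice looks worse in isolation.
I = {(0, 0): 5, (0, 1): 6, (1, 0): 6, (1, 1): 2}
C = [1, 0]  # HVAC/controls contribution

def load(a, b, c):
    return I[(a, b)] + C[c]

def sequential_design():
    """Optimize one stage at a time, holding later stages at defaults (0)."""
    a = min((0, 1), key=lambda a: load(a, 0, 0))
    b = min((0, 1), key=lambda b: load(a, b, 0))
    c = min((0, 1), key=lambda c: load(a, b, c))
    return load(a, b, c)

def integrated_design():
    """Optimize all stages simultaneously."""
    return min(load(a, b, c) for a, b, c in itertools.product((0, 1), repeat=3))
```

Here the sequential approach settles on a load of 5 while the integrated search finds 2, mirroring the paper's finding that simultaneous optimization can reach more energy-efficient outcomes.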
Illuminance Prediction Using a Hybrid Simulation Model Based on a Single Reference Measurement
Hyeong Gon Jo and Cheol Soo Park (Seoul National University)
Abstract
This paper suggests a methodology for predicting illuminance at multiple points using a minimal number of sensors. The study shows that daylight illuminance measured at a reference point, when enhanced by Radiance simulation, can be good enough to predict daylight illuminances at other points; we call this a hybrid daylighting simulation approach. The approach was tested in a mechanical factory building for seven days. It is found that the approach can predict illuminances at multiple points of interest and can be beneficially applied to the dimming control of existing buildings.
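The hybrid idea, as we read it, is to anchor simulated values to the one measurement; a one-line sketch (our simplification, not necessarily the paper's exact correction scheme) scales Radiance-simulated illuminances by the measured-to-simulated ratio at the reference point:

```python
def hybrid_illuminance(measured_ref, simulated_ref, simulated_points):
    """Scale simulated illuminances by the measured/simulated ratio at the reference."""
    scale = measured_ref / simulated_ref
    return [scale * s for s in simulated_points]
```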
Classification of Building Energy Pattern Based on RANSAC and K-Shape Algorithm
Donghyuk Yi and Young Seo Yoo (Seoul National University)
Abstract
For efficient energy management of buildings, it is important to classify building energy patterns so that dominant design variables can be identified. In this regard, the authors present a classification method for building energy patterns using the RANSAC and K-Shape algorithms. A reference office building was chosen and its design space populated using the Sobol sequence method, resulting in 1,000 samples. Energy use data for the 1,000 sample buildings were obtained from EnergyPlus, a dynamic building simulation tool developed by the US DOE. The relationship between outdoor temperature and building energy use was selected as the building energy pattern. It is shown that the 1,000 buildings can be classified into two clusters (heating dominant vs. cooling dominant) and that four corresponding architectural design variables can be identified.
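RANSAC itself is standard; a generic line-fitting sketch (not the paper's implementation) shows how it recovers a temperature-energy trend despite outlying days:

```python
import numpy as np

def ransac_line(x, y, n_iter=100, thresh=0.5, seed=0):
    """Fit y = slope*x + intercept robustly by random two-point consensus."""
    rng = np.random.default_rng(seed)
    best_inliers, best = -1, None
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        slope = (y[j] - y[i]) / (x[j] - x[i])
        intercept = y[i] - slope * x[i]
        inliers = int(np.sum(np.abs(y - (slope * x + intercept)) < thresh))
        if inliers > best_inliers:
            best_inliers, best = inliers, (slope, intercept)
    return best
```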
Using UML and OCL as Languages to Define DEVS Atomic Models
María Julia Blas and Silvio Gonnet (UTN-CONICET)
Abstract
This paper presents work in progress intended to define the foundations for building a representation of DEVS using conceptual modeling languages from information systems engineering. We use the UML and OCL languages to define a metamodel that conceptualizes atomic DEVS models. Such a representation enhances the DEVS modeling activity by providing atomic model definitions as instances of the developed metamodel.
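For readers unfamiliar with the target of the metamodel, an atomic DEVS model is the tuple ⟨X, Y, S, δint, δext, λ, ta⟩. A minimal Python rendering of the time-advance/output/internal-transition trio (our sketch, unrelated to the paper's UML/OCL artifacts) is:

```python
class Generator:
    """Minimal autonomous atomic DEVS model: emits a "job" every `period` units."""
    def __init__(self, period):
        self.period = period
        self.count = 0            # state S
    def ta(self):
        return self.period        # time advance function
    def output(self):
        return "job"              # output function (lambda)
    def delta_int(self):
        self.count += 1           # internal transition function

def run(model, until):
    """Drive a single autonomous atomic model (no external events)."""
    t, events = 0.0, []
    while t + model.ta() <= until:
        t += model.ta()
        events.append((t, model.output()))
        model.delta_int()
    return events
```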
Impacts on Airport Elevator System When Exposed to Disruptive Events: A Discrete Event Simulation Approach
Guilherme S. Zapola, Yago F. Gomes, Evandro J. Silva, and Cláudio J. P. Alves (Aeronautics Institute of Technology)
Abstract
In this paper we present the results of a Discrete Event Simulation (DES) of an elevator system that connects the arrivals hall and the baggage claim area of a fictitious airport. Disruptive events (DE) were simulated, causing the elevator system to stop for some time intervals. The impacts of these DE on the sizing of the processing and operation area were then analyzed. We conclude that elevator failure strongly impacts the nearby circulation areas where passengers wait for the elevator service. This impact is not covered by current guidelines, which treat airport components through an isolated, analytical approach.
Co-simulation of Composable Cellular Automata DEVS and Diffusion PDE Models
Chao Zhang and Hessam S. Sarjoughian (Arizona State University)
Abstract
Modeling and simulating cancerous cells interacting with their environment can yield insight into cancer biology. Diffusion and agent-based models can individually define the continuous and discrete dynamics of different kinds of human cells toward achieving this goal. On the one hand, the continuous diffusion modeling approach is appropriate for defining the dynamic gradient formation of CXCL12 chemokines. On the other hand, the Composable Cellular Automata DEVS modeling approach lends itself to defining the chemotaxis of CXCR4+ and CXCR7+ cells subject to the chemokine gradient. The OpenModelica and DEVS-Suite simulators are well suited to modeling and simulating the diffusion and agent dynamics useful for the study of cancer biology. They are integrated using the FMI standard to develop and co-simulate continuous and event-driven models.
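A stripped-down 1D analogue of the two sides being co-simulated (ours; the paper couples full Modelica and DEVS models via FMI) pairs an explicit finite-difference diffusion step for the chemokine field with a gradient-following move for a cell agent:

```python
import numpy as np

def diffuse_step(c, D, dx, dt):
    """Explicit finite-difference step of dc/dt = D * d2c/dx2 (fixed boundaries)."""
    nxt = c.copy()
    nxt[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    return nxt

def chemotaxis_step(pos, c):
    """Move the agent one cell toward the higher-concentration neighbor."""
    left = c[pos - 1] if pos > 0 else -np.inf
    right = c[pos + 1] if pos < len(c) - 1 else -np.inf
    if max(left, right) <= c[pos]:
        return pos  # local maximum: stay put
    return pos - 1 if left > right else pos + 1
```

A co-simulation loop would alternate the two steps, the field update playing the continuous role and the agent move the event-driven role.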
Analysing Supply Chain Factors Affecting Antibiotics Shortage with System Dynamics Simulation
Ines Julia Khadri (Uppsala University)
Abstract
The antibiotics supply is continuously facing disruptions, and markets are suffering from shortages at a global level. The antibiotics supply chain is a dynamically complex system, and a number of issues affect the availability of these essential drugs. Upstream issues predominantly influence production, while downstream issues influence decision-makers from a business-profitability perspective. Inter-organizational cooperation and transparency also affect the antibiotics supply. Looking at the Swedish market, this study aims to map the factors causing antibiotic shortages from the viewpoint of the entire system. A causal loop diagram is used to map the causal relationships among these factors and with the core issue of shortage; a system dynamics simulation will then quantitatively assess their degree of influence. Drawing on both theoretical and empirical lenses, this work moves from analyzing the dynamic supply issue towards providing support to decision- and policy-makers for optimizing the supply of antibiotics.
A Social Cost Based Dynamic Restoration Decision-making Modelling Framework for Power Distribution System
Sudipta Chowdhury and Jin Zhu (University of Connecticut)
Abstract
Prioritizing the restoration of critical facilities is important, as delayed restoration may cause significant social distress. Few studies have investigated the social impact of power outages at these facilities and its influence on the restoration process. This study proposes a simulation modeling framework that investigates the dynamics of restoration decision-making based on the social impact of disrupted critical facilities. It was tested in a case study using storm-related power disruption and restoration data from a power utility company. Numerical results indicated that incorporating social impacts into decision-making significantly reduces the social cost of a power outage, by 26.73% in the case study. Moreover, through simulation experiments, a cost-benefit analysis based on restoration labor cost and the reduction in social impacts under different resource-allocation strategies was conducted, and a tradeoff relationship was identified. The findings highlight the capability of the model to help decision-makers make informed decisions after a power outage.
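The abstract does not state the prioritization rule; one classical baseline (our illustration, not the paper's method) is the weighted-completion-time ratio rule, restoring facilities in decreasing order of social-cost rate per repair hour:

```python
def restoration_order(facilities):
    """facilities: (name, repair_hours, social_cost_per_hour) triples.

    The ratio rule minimizes total weighted completion time for a single crew.
    """
    return sorted(facilities, key=lambda f: f[2] / f[1], reverse=True)

def total_social_cost(order):
    """Accrued social cost when each facility pays its rate until it is restored."""
    t = cost = 0.0
    for _name, hours, rate in order:
        t += hours
        cost += rate * t
    return cost
```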
A Modelling Base to Support Long-term Care Comparative Studies by an Agent-based Simulation Approach
Shuang Chang and Hiroshi Deguchi (Tokyo Institute of Technology)
Abstract
This work aims to construct a modelling base that enables the deployment of an agent-based simulation approach for comparing and evaluating long-term care (LTC) models. By leveraging advanced simulation approaches together with the latest empirical data, the modelling base can be used to investigate the key question of the extent to which varied LTC designs may influence LTC system outcomes, in terms of equity and efficiency, when instituted in different population structures. A method is proposed to construct the modelling base, composed of individuals, households, and service providers drawn from approximated target distributions. Global standardized datasets on the healthcare issues of aging populations are analyzed and converted into computational agents equipped with attributes and behavioral rules. Optimization methods are integrated to tune the joint distribution of individual attributes and household structures. Different LTC models can then be implemented computationally on this modelling base for future LTC comparative studies.
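One standard way to tune a joint distribution to target marginals, possibly in the spirit of the tuning step described (the paper's exact method is not given), is iterative proportional fitting:

```python
def ipf(table, row_targets, col_targets, iters=50):
    """Rescale a contingency table until row and column sums match the targets."""
    t = [row[:] for row in table]
    for _ in range(iters):
        for i, target in enumerate(row_targets):   # match row marginals
            s = sum(t[i])
            t[i] = [x * target / s for x in t[i]]
        for j, target in enumerate(col_targets):   # match column marginals
            s = sum(row[j] for row in t)
            for row in t:
                row[j] *= target / s
    return t
```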
Towards an Agent-Based Model for Sustainable Agricultural Practices on Scottish Farms
Matthew Thomas Hutcheson, Shona Blair, and Alec Morton (University of Strathclyde)
Abstract
Recent years have seen increasing awareness of the impact of conventional agricultural practices on our planet. High-input farming systems have been successful in dramatically increasing food production for an exponentially growing global population, but they are also leading contributors to GHG emissions and biodiversity loss. There is therefore much interest in alternative, more sustainable farming practices. We propose the development of an agent-based model, in close collaboration with Scottish farmers, to explore land-use decision-making. It is envisaged that this study will give insight into the uptake of more sustainable farming practices in Scotland, while also enhancing understanding of the utility of agent-based modelling for agroecological systems.
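A first sketch of such uptake dynamics (ours, far simpler than the proposed model) is a threshold-adoption rule on a ring of farms: a farm adopts the sustainable practice once a sufficient fraction of its neighbours has:

```python
def adoption_step(state, threshold):
    """state: 0/1 adoption flags on a ring of farms.

    A farm adopts when the fraction of its two neighbours that have
    already adopted reaches the (hypothetical) social threshold.
    """
    n = len(state)
    new = state[:]
    for i in range(n):
        frac = (state[(i - 1) % n] + state[(i + 1) % n]) / 2
        if frac >= threshold:
            new[i] = 1
    return new
```

Iterating the step spreads adoption outward from the initial adopter, the kind of diffusion-of-practice behaviour an agent-based farm model would elaborate with richer decision rules.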
A Simulation-Optimization Approach to Design a Resilient Food Supply Network for Natural Disaster Responses in West Java
Duc-Cuong Dang, Christine Currie, and Stephan Onggo (University of Southampton) and Tomy Perdana, Diah Chaerani, and Cipta Endyana (Universitas Padjadjaran)
Abstract
We present the development of a simulation-optimization approach to building a resilient food supply network for natural disaster responses in West Java, Indonesia. The objective of the model is to support disaster preparedness and response decisions, such as warehouse locations, replenishment policies, and response allocations. Our approach is designed to understand the trade-off between two criteria: effectiveness, i.e., fulfilment of demand, and efficiency, i.e., response time and storage and operational costs. It will allow both the evaluation of the current system, which is managed separately for different food items, and the investigation of the coordinated use of shared resources.
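The effectiveness-efficiency trade-off is naturally explored by Pareto filtering of simulated designs; a generic sketch (illustrative, not the project's optimizer) keeps the non-dominated (fulfilment, cost) pairs:

```python
def pareto_front(solutions):
    """solutions: (fulfilment, cost) pairs; maximize fulfilment, minimize cost."""
    front = []
    for f, c in solutions:
        dominated = any(
            (f2 >= f and c2 <= c and (f2, c2) != (f, c))
            for f2, c2 in solutions
        )
        if not dominated:
            front.append((f, c))
    return front
```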