The reduction of losses related to hurricanes and other extreme weather phenomena involves many complex aspects, ranging from the purely theoretical, observational, computational, and numerical to the operational and decisional. A correct warning can lead to proper evacuation and damage mitigation, and produce immense benefits. However, over-warning can lead to substantial unnecessary costs, a reduction of confidence in warnings, and a lack of appropriate response. In this chain of information, the role played by scientific research is crucial. The National Oceanic and Atmospheric Administration (NOAA), in combination with the National Aeronautics and Space Administration (NASA), other agencies, and universities, is contributing to these efforts through observational and theoretical research to better understand the processes associated with extreme weather. This includes model and data assimilation development, Observing System Experiments (OSE), and Observing System Simulation Experiments (OSSE) designed to ascertain the value of existing observing systems and the potential of new observing systems to improve weather prediction and theoretical understanding. We describe innovative research for developing advanced next-generation global and regional models to improve weather prediction, and the application of OSSEs to optimize the observing system.
The urban environment is becoming increasingly connected and complex. In the coming decades, we will be surrounded by billions of sensors, devices, and machines: the Internet of Things (IoT). As the world becomes more connected, we will become dependent on machines and simulation to make decisions on our behalf. When simulation systems use data from sensors, devices, and machines (i.e., things) to make decisions, they need to learn how to trust that data, as well as the things they are interacting with. As embedded simulation becomes more commonplace in IoT and smart city applications, it is essential that decision makers are able to trust the simulation systems making decisions on their behalf. This paper looks at trust from an IoT perspective, describes a set of research projects that span multiple dimensions of trust, and discusses whether these concepts of trust apply to simulation.
Simulation has become the critical tool in understanding uncertainty in a wide range of disciplines. In this talk, we will discuss how simulation is influencing both stochastics and statistics, and also discuss insights for the simulation modeler that derive from these methodological areas. We will also talk about how the interactions between simulation, stochastics, and statistics are changing in light of advances in machine learning, increasing data volumes, and the growing availability of inexpensive parallel computing platforms.
We present a method to apply simulations to the tracking of a live event such as an evacuation. We assume only a limited amount of information is available as the event is ongoing, through population-counting sensors such as surveillance cameras. In this context, agent-based models provide a useful ability to simulate individual behaviors and relationships among members of a population; however, agent-based models also introduce a significant data-association challenge when used with population-counting sensors that do not specifically identify agents. The main contribution of this paper is to develop an efficient method for managing the combinatorial complexity of data association. The key to our approach is to map from the state-space to an alternative correspondence-vector domain, where the measurement update can be implemented efficiently. We present a simulation study involving an evacuation over a road network and show that our method allows close tracking of the population over time.
Data-driven Optimal Transport Cost Selection for Distributionally Robust Optimization
Best Contributed Theoretical Paper - Finalist
Jose Blanchet (Stanford University), Yang Kang (Columbia University), Karthyek Murthy (Singapore University of Technology & Design), and Fan Zhang (Stanford University)
Recent work has shown that several machine learning algorithms, such as square-root Lasso, Support Vector Machines, and regularized logistic regression, among many others, can be represented exactly as distributionally robust optimization (DRO) problems. The distributional uncertainty set is defined as a neighborhood centered at the empirical distribution, where the neighborhood is measured by an optimal transport distance. In this paper, we propose a methodology that learns such a neighborhood in a natural, data-driven way. We show rigorously that our framework encompasses adaptive regularization as a particular case. Moreover, we demonstrate empirically that our proposed methodology is able to improve upon a wide range of popular machine learning estimators.
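For reference, the generic optimal-transport DRO formulation that such representations rest on can be sketched as

\[
\min_{\beta}\; \sup_{P:\, D_c(P, P_n) \le \delta}\; \mathbb{E}_{P}\left[\ell(X, Y; \beta)\right],
\]

where \(P_n\) is the empirical distribution of the data, \(D_c\) is an optimal transport discrepancy induced by a ground cost \(c\), \(\delta\) is the radius of the distributional neighborhood, and \(\ell\) is the loss of the underlying estimator. The notation is illustrative rather than the paper's; choosing the cost \(c\), and hence the shape and size of the neighborhood, in a data-driven way is the selection problem the paper addresses.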
Simulation solution validation concerns the comparison between the expected and the actual performance of a solution provided by a simulation model. Such a comparison can become challenging when not only has the implementation of the solution changed the environment, but the processes and data have changed as well. We illustrate this challenge using a case study at an Integrated Emergency Post (IEP), a collaboration between a general practitioners' post and a hospital's emergency department to provide out-of-hours emergency care. After the simulation study, the proposed solution was implemented, and data were gathered for two years. We validated the solution by performing various comparisons, using simulated and realized performance, under the original and changed data and processes, and with and without the proposed solution. We propose a solution validation framework to structure these comparisons, and provide key insights regarding solution validation, using our case study at the IEP.
More than two decades ago, Butler and Finelli examined the problem of experimentally demonstrating the reliability of safety critical software and concluded that it was impractical. We revisit this conclusion in the light of recent advances in computer system virtualization technology and the capability to link virtualization tools to simulation models of physical environments. A specific demonstration of testing for reliability is offered using software that is part of a building control system. Extrapolating the results of this demonstration, we conclude that experimental demonstrations of high reliability may now be feasible for some applications.
Truck platooning is the concept of multiple trucks driving at aerodynamically efficient inter-vehicle distances in a cooperative and semi-autonomous fashion. Advanced sensor technology and wireless communication are used to maintain short and safe following distances between the trucks. This paper proposes an agent-based simulation model to evaluate a matchmaking system for trucks to find a suitable partner to platoon with. We consider two types of platoon matching: real-time (at a truck stop) and opportunistic (while driving on the highway). We evaluate the proposed system using a case study at the Port of Rotterdam and the surrounding area, where we study various factors influencing platoon formation and profitability. Results show that the most influential factors in both platoon formation and total platoon profitability are wage savings and the possibility for trucks of different brands to platoon together.
In many industrial manufacturing companies, energy has become a major cost factor. Energy aspects are therefore included in the decision-making systems of production planning and control to reduce manufacturing costs. To this end, the simulation of production processes requires not only the consideration of logistical and technical production factors but also the integration of time-dependent energy flows, which are continuous in nature. A hybrid simulation, using a continuous approach to depict the energy demand of production processes in combination with a discrete approach to map the material flows and logistic processes, captures the complex interactions between material flow and energy usage in production more realistically. This paper presents a hybrid simulation approach combining System Dynamics, Discrete-Event and Agent-Based Simulation for energy efficiency analysis in production, considering energy consumption in the context of planning and scheduling operations and applying it to a use-case scenario of mechanical processing of die-cast parts.
We compare two approaches for quantile estimation via randomized quasi-Monte Carlo (RQMC) in an asymptotic setting where the number of randomizations for RQMC grows large but the size of the low-discrepancy point set remains fixed. In the first method, for each randomization, we compute an estimator of the cumulative distribution function (CDF), which is inverted to obtain a quantile estimator, and the overall quantile estimator is the sample average of the quantile estimators across randomizations. The second approach instead computes a single quantile estimator by inverting one CDF estimator across all randomizations. Because quantile estimators are generally biased, the first method leads to an estimator that does not converge to the true quantile as the number of randomizations goes to infinity. In contrast, the second estimator does, and we establish a central limit theorem for it. Numerical results further illustrate these points.
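A minimal numerical sketch of the two estimators, using a randomly shifted rank-1 lattice as the RQMC point set and a toy output function (the generating vector, test function, and parameter values below are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def rqmc_points(n, d, shift):
    """Rank-1 lattice points with a random shift (a simple RQMC construction)."""
    gen = np.array([1, 364981, 245389, 97823])[:d]  # illustrative generating vector
    i = np.arange(n).reshape(-1, 1)
    return (i * gen / n + shift) % 1.0

def toy_output(u):
    """Illustrative output: sum of d standard normals obtained by inversion."""
    return norm.ppf(u).sum(axis=1)

n, d, R, p = 256, 4, 100, 0.9
per_rand_quantiles, pooled = [], []
for _ in range(R):
    y = toy_output(rqmc_points(n, d, rng.random(d)))
    per_rand_quantiles.append(np.quantile(y, p))  # invert the CDF of one randomization
    pooled.append(y)

est_1 = np.mean(per_rand_quantiles)             # method 1: average of per-randomization quantiles
est_2 = np.quantile(np.concatenate(pooled), p)  # method 2: invert one CDF built from all randomizations
print(est_1, est_2)
```

In this simplified setting, the CDF estimator combined across randomizations is the equally weighted average of the per-randomization empirical CDFs, so inverting it reduces to a pooled sample quantile.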
This paper considers the problem of choosing the best design alternative under a small simulation budget, where making inferences about all alternatives from a single observation could enhance the probability of correct selection. We propose a new selection rule that exploits the relative similarity information between pairs of alternatives and show its improvement in selection performance, evaluated by the probability of correct selection, compared to selection based on collected sample averages. We illustrate the effectiveness by applying our selection index to simulated ranking and selection problems using two well-known budget allocation policies.
This paper presents a spline-based input modelling method for inferring the intensity function of a nonhomogeneous Poisson process (NHPP) given arrival-time observations. A simple method for generating arrivals from the resulting intensity function is also presented. Splines are a natural choice for modelling intensity functions as they are smooth by construction, and highly flexible. Although flexibility is an advantage in terms of reducing the bias with respect to the true intensity function, it can lead to overfitting. Our method is therefore based on maximising the penalised NHPP log-likelihood, where the penalty is a measure of rapid changes in the spline-based representation. An empirical comparison of the spline-based method against two recently developed input modelling techniques is presented, along with an illustration of the method given arrivals from a real-world accident and emergency (A&E) department.
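In generic form, a penalised NHPP log-likelihood of this kind, for arrival times \(t_1, \dots, t_n\) on \([0, T]\) and a spline-based intensity \(\lambda_{\beta}\), can be sketched as

\[
\ell_{\theta}(\beta) = \sum_{i=1}^{n} \log \lambda_{\beta}(t_i) - \int_{0}^{T} \lambda_{\beta}(t)\,dt - \theta \int_{0}^{T} \left(\lambda_{\beta}''(t)\right)^{2} dt,
\]

where the first two terms form the standard NHPP log-likelihood and the last term is a roughness penalty with smoothing weight \(\theta \ge 0\). The squared-second-derivative penalty shown here is a common choice and only an assumption; the paper's particular measure of rapid changes in the spline representation may differ.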
We propose and evaluate novel sampling policies for a Bayesian ranking and selection problem with pairwise comparisons. We introduce the lookahead contraction principle and apply it to three types of value factors for lookahead policies. The resulting lookahead contraction policies are analyzed both with the minimal number of lookahead steps required for obtaining informative value factors, and with a fixed number of lookahead steps. We show that lookahead contraction reduces the minimal number of required lookahead steps, and that contraction guarantees finiteness of the minimal lookahead. For minimal lookahead, we demonstrate empirically that lookahead contraction never leads to worse performance, and that lookahead contraction policies based on the expected value of improvement perform best. For fixed lookahead, we show that all lookahead contraction policies eventually outperform their counterparts without contraction, and that contraction results in a performance boost for policies based on the predictive probability of improvement.
This advanced tutorial introduces the engineering principles of combat modeling and distributed simulation. It starts with the historical context and introduces terms, definitions, and guidelines of interest in this domain. The combat modeling section introduces the main concepts for modeling the environment, movement, effects, sensing, communications, and decision making. The distributed simulation section focuses on the challenges of distributed simulation and the current simulation interoperability standards that support dealing with them. Overall, the tutorial introduces the scholar to the operational view (what needs to be modeled), the conceptual view (how to do combat modeling), and the technical view (how to conduct distributed simulation).
The benefits of Hybrid Simulation (HS) are well documented in the academic literature. It offers deeper insights into the real-life system, as it allows modelers to assess its inherent problems from different dimensions. As a result, HS has recently attracted more attention from within the Modeling and Simulation arena. HS comes in many shapes and forms, for example, linking two or more simulation models, linking simulation models with facilitative models, or linking simulation models with analytical models. This paper aims to explore several concepts related to HS modeling and design.
Discrete Event System Specification (DEVS) is a mathematical formalism to model and simulate discrete-event dynamic systems. The advantages of DEVS include a rigorous formal definition of models and a well-defined mechanism for modular composition. In this tutorial, we introduce Cadmium, a new DEVS simulator. Cadmium is a C++17 header-only DEVS simulator that is easy to include and integrate into different projects. We discuss the tool's Application Programming Interface, the simulation algorithms used, and its implementation. We present a case study as an example to explain how to implement DEVS models in Cadmium.
Modern enterprises are large complex systems operating in highly dynamic environments, thus requiring quick response to a variety of change drivers. Moreover, they are systems of systems in which understanding is available only in localized contexts, and even that is typically partial and uncertain. With the overall system behaviour hard to know a priori, and with conventional techniques for system-wide analysis either lacking in rigour or defeated by the scale of the problem, current practice often relies exclusively on human expertise for monitoring and adaptation. We present an approach that combines ideas from modeling & simulation, reinforcement learning, and control theory to make enterprises adaptive. The approach hinges on the concept of a Digital Twin: a set of relevant models that are amenable to analysis and simulation. The paper illustrates the approach in two real-world use cases.
Paolo Bocciarelli (University of Rome Tor Vergata), Andrea D'Ambrogio (University of Rome Tor Vergata), and Andrea Giglio and Emiliano Paglia (University of Rome Tor Vergata)
Simulation-based analysis is widely recognized as an effective technique to support verification and validation of complex systems throughout their lifecycle. The inherently distributed nature of complex systems makes the use of distributed simulation approaches a natural fit. However, the development of a distributed simulation is by itself a challenging task in terms of effort and required know-how. This tutorial introduces an approach that applies highly automated model-driven engineering principles and standards to ease the development of distributed simulations. The proposed approach is framed around the development process defined by the DSEEP (Distributed Simulation Engineering and Execution Process) standard, as applied to distributed simulations based on the HLA (High Level Architecture), and is focused on a chain of automated model transformations. A case study is used in the tutorial to illustrate an example application of the proposed model-driven approach to the development of an HLA-based distributed simulation of a space system.
An Operating System (OS) is a well-known concept in computer science: an interface between humans and computer hardware (Windows, iOS…). With a view to developing the future generation of enterprise systems based on IoT and Cyber-Physical System principles, this paper proposes to develop an Enterprise Operating System (EOS).
ERP is defined as a platform that allows an organization, at the operational level, to use a system of integrated applications to automate many back-office functions related to technology and services; Best of Breed (BOB) is defined as leading software that focuses on providing the features and functions for one component in the value chain while maintaining multiple systems. Unlike both, the EOS will act as an interface between enterprise business managers and resources, supporting distributed simulations and allowing interoperability over heterogeneous environments by ensuring real-time monitoring and control of enterprise operations.
SIPmath represents uncertainties as coherent arrays of realizations called SIPs, which may be shared between diverse simulation applications across the enterprise. This allows simulations to be linked together to form networks in which the output distributions of one simulation become the input distributions of another. Furthermore, the outputs of stochastic models in packages such as R or Python, or of discrete-event simulations, may be shared with managers using interactive dashboards in native Microsoft Excel. Two recent open-source advances in simulation modeling and analysis have yielded great efficiencies in this approach. Metalog distributions can fit virtually any continuous distribution of data with an analytical F-inverse function, much like a Taylor series. The HDR Portable Uniform Random Number Generator produces identical results on all platforms, including as a single-cell formula in Excel. Participants are encouraged to bring their laptops for a hands-on learning experience.
As clinical trials are increasingly globalized, with complex footprints over hundreds of sites worldwide, sponsors and contract research organizations constantly seek to make better and faster decisions on their investigational products, and drug supply planning must evolve to ensure an efficient, effective supply chain for every study. This endeavor is challenging due to unique characteristics of multi-center trials, including randomization schemes for multiple treatment arms, a finite recruitment target (that is, only a finite number of subjects needs to be recruited across all sites), uncertainty in recruitment rates, etc. Simulation has great potential as the ideal tool that companies can utilize to make better decisions with consideration of both supply chain risks and costs. To achieve this goal, it is important to understand the specifics of clinical trial supply chains. Built upon this knowledge, our paper provides an advanced tutorial on modeling and practical considerations in clinical supply simulation.
Truck platooning is the concept of multiple trucks driving at aerodynamically efficient inter-vehicle distances in a cooperative and semi-autonomous fashion. Advanced sensor technology and wireless communication are used to maintain short and safe following distances between the trucks. This paper proposes an agent-based simulation model to evaluate a matchmaking system for trucks to find a suitable partner to platoon with. We consider two types of platoon matching: real-time (at a truck stop) and opportunistic (while driving on the highway). We evaluate the proposed system using a case study at the Port of Rotterdam and the surrounding area, where we study various factors influencing platoon formation and profitability. Results show that the most influential factors in both platoon formation and total platoon profitability are wage savings and the possibility for trucks of different brands to platoon together.
Elevator traffic affects people who use elevators in high-rise buildings. This occurs when elevators operate to cater to more passengers than their intended capacity. The next set of passengers still needs to wait approximately four minutes before being serviced, even if the elevators implement a static zoning division to reduce waiting time during peak hours. Hence, there is a need to optimize current systems. To better understand how the system works, along with its pitfalls, the environment is simulated using agent-based modeling. The simulation is modeled and fitted using data gathered from ID scans and CCTV footage. Through simulation, it is possible to visualize the behavior inside the building and minimize elevator traffic. Removing the downward elevator service optimized the current system and reduced the round-trip time of the elevators and the waiting time of passengers in high-rise buildings that use static zoning division.
Modern enterprises aim to achieve their business goals while operating in a competitive and dynamic environment. This requires these enterprises to be efficient, adaptive, and amenable to continuous transformation. However, identifying effective control measures, adaptation choices, and transformation options for a specific enterprise goal is often both a challenging and an expensive task for most complex enterprises. The construction of a high-fidelity digital twin to evaluate the efficacy of a range of control measures, adaptation choices, and transformation options is considered a cost-effective approach in the engineering disciplines. This paper presents a novel approach to analogously utilise the concept of a digital twin in controlling and adapting large complex business enterprises, and demonstrates its efficacy using a set of adaptation scenarios of a large university.
Chair: Maurizio Tomasella (University of Edinburgh)
On the Modeling and Agent-based Simulation of a Cooperative Group Anagram Game
Zhihao Hu and Xinwei Deng (Virginia Tech); Abhijin Adiga (University of Virginia); Gizem Korkmaz (University of Virginia); Chris J. Kuhlman, Dustin Machi, Madhav V. Marathe, and S. S. Ravi (University of Virginia); Yihui Ren (Brookhaven National Laboratory); Vanessa Cedeno-Mieles (Virginia Tech); Saliya Ekanayake (Lawrence Berkeley National Laboratory); and Brian J. Goode, Naren Ramakrishnan, Parang Saraf, and Nathan Self (Virginia Tech)
Anagram games (i.e., word construction games in which players use letters to form words) have been researched for some 60 years. Games with individual players are the subject of over 20 published investigations. Moreover, there are many popular commercial anagram games such as Scrabble. Recently, cooperative team play of anagram games has been studied experimentally. With all of the experimental work and the popularity of such games, it is somewhat surprising that very little modeling of anagram games has been done to predict player behavior/actions in them. We devise a cooperative group anagram game and develop an agent-based modeling and simulation framework to capture player interactions of sharing letters and forming words. Our primary goals are to understand, quantitatively predict, and explain individual and aggregate group behavior, through simulations, to inform the design of a group anagram game experimental platform.
Running contests has been an effective way to solicit effort from a large pool of participants. Existing research mostly focuses on small contests that typically consist of two or several perfectly rational agents. In practice, however, agents are often found in complex environments that involve large numbers of players, and they usually use thresholding policies to make decisions. Despite this, there is a surprising lack of understanding of how contest factors influence contest outcomes. Here, we present the first simulation analysis of how the parameters of the contest success function, the population dynamics, and the agents' cutoff policies influence the outcomes of contests with non-cooperative thresholding agents. Experimental results demonstrate that stakeholders can design (approximately) optimal contests that satisfy both their own interests and the agents' by choosing a relatively low bias factor. Our work brings new insights into how to design proper competitions to coordinate thresholding agents.
As AI advances and becomes more complicated, it becomes necessary to study the safety implications of its behavior. This paper expands upon prior AI-safety research to create a model for studying the harmful outcomes of multi-agent systems. We outline previous work that has highlighted multiple aspects of AI-safety research and focus on AI safety in multi-agent systems. After reviewing the previous literature, we present a model focused on "flash crashes", a concept often found in economics. The model was constructed using an interdisciplinary approach that combines game theory, machine learning, cognitive science, and systems theory to study flash crashes in complex human-AI systems. We use the model to study a complex interaction between AI agents, and our results indicate that the multi-agent system in question is prone to cause flash crashes.
Hospital admission and discharge dynamics facilitate pathogen transmission among individuals in communities, hospitals, nursing homes, and other healthcare facilities. We developed a microsimulation to simulate this movement, as patients are at increased risk for healthcare-associated infections, antibiotic exposure, and other health complications while admitted to healthcare facilities. Patients can also serve as a source of infection throughout the healthcare network as they move locations. This microsimulation is a base model that can be enhanced with various disease-specific agent-based health modules. We calibrated the model to simulate patient movement in North Carolina, where over 1 million hospital admissions occur annually. Each patient originated from a unique starting location and eventually transferred to another healthcare facility or returned home. Here, we describe our calibration efforts to ensure an accurate patient flow and discuss the necessary steps to replicate this model for other healthcare networks.
Disease Spread Simulation to Assess the Risk of Epidemics During the Global Mass Gathering of Hajj Pilgrimage
Sultanah Alshammari (King Abdulaziz University, University of North Texas) and Harsha Gwalani, Joseph E. Helsing, and Armin R. Mikler (University of North Texas)
Global mass gatherings can pose a risk for communicable disease outbreaks. In these events, millions of people from different regions of the world gather at a specific location over a specific period of time. Such settings have the potential for international participants to import and/or export infectious diseases to and from the host countries. Planning and preparing for public health risks at global mass gatherings is a challenging and complex process. Advanced risk assessment tools are important to identify potential disease outbreaks. In this study, we propose a computational epidemic simulation framework to simulate disease transmission from the arrival to the departure of international participants in the global event of Hajj. Computational simulations of disease spread in global mass gatherings provide public health authorities with powerful tools to assess the implications of these events and to evaluate the efficacy of prevention and control strategies to reduce their potential impacts.
We present a method to apply simulations to the tracking of a live event such as an evacuation. We assume only a limited amount of information is available as the event is ongoing, through population-counting sensors such as surveillance cameras. In this context, agent-based models provide a useful ability to simulate individual behaviors and relationships among members of a population; however, agent-based models also introduce a significant data-association challenge when used with population-counting sensors that do not specifically identify agents. The main contribution of this paper is to develop an efficient method for managing the combinatorial complexity of data association. The key to our approach is to map from the state-space to an alternative correspondence-vector domain, where the measurement update can be implemented efficiently. We present a simulation study involving an evacuation over a road network and show that our method allows close tracking of the population over time.
Validation and Evaluation of Emergency Response Plans through Agent-based Modeling and Simulation
Joseph E. Helsing and Harsha Gwalani (University of North Texas); Sultanah M. Alshammari (King Abdulaziz University, University of North Texas); and Armin R. Mikler (University of North Texas)
Biological emergency response planning plays a critical role in protecting the public from the possibly devastating results of sudden disease outbreaks. These plans describe the distribution of medical countermeasures across the affected region using limited resources within a restricted time window. The ability to determine whether such plans are feasible, in terms of successfully providing service to affected populations within the time limit, is crucial. Current efforts to validate plans, such as live drills and training, may not test plan activation at the appropriate scale or account for dynamic real-time events. This paper presents Validating Emergency Response Plan Execution Through Simulation (VERPETS), a novel computational system for the agent-based simulation of biological emergency response plan activation. This system integrates raw road network, population distribution, and emergency response plan data, and simulates the traffic in the affected region using SUMO (Simulation of Urban Mobility).
Coastal flooding is the most expensive type of natural disaster in the United States. Policy initiatives to mitigate the effects of these events depend upon understanding flood victim responses at the individual and municipal levels. Agent-Based Modeling (ABM) is an effective tool for analyzing community-wide responses to natural disasters, but the quality of an ABM's performance is often challenging to determine. This paper discusses the complexity of the Protective Action Decision Model (PADM) and Protection Motivation Theory (PMT) for human decision making regarding hazard mitigation. A combined (PADM/PMT) model is developed and integrated into the MASON modeling framework. The ABM implements a hindcast of Hurricane Sandy's damage to Sea Bright, NJ and homeowner post-flood reconstruction decisions. It is validated against damage assessments and post-storm surveys. The contribution of socio-economic factors and the built environment to model performance is also addressed and suggests that mitigation for townhouse communities will be challenging.
While mass shootings in schools plague campus security in the United States, China, due to its gun ban, instead deals with knife-wielding attackers. Over the past three years, more than 30 stabbings have occurred on school campuses, with hundreds of people injured or killed. This paper computationally examines the consequences of people's actions during a university campus attack using a combination of agent-based simulation and GIS mapping. Experimental results suggest that when facing an attacker with a knife, people should team up and attempt to subdue the attacker, whereas when facing an attacker with a gun, people should flee the area and wait for law enforcement to arrive if they cannot subdue the attacker quickly.
Panel: Credible Agent-based Simulation: An Illusion or Only a Step Away?
Chair: Bhakti Stephan Onggo (University of Southampton)
Credible Agent-based Simulation - An Illusion or Only a Step Away?
Bhakti Stephan Onggo (University of Southampton), Levent Yilmaz (Auburn University), Franziska Klügl (Örebro University), Takao Terano (Chiba University of Commerce), and Charles M. Macal (Argonne National Laboratory)
During the World Café activity at the 2018 Winter Simulation Conference, we discussed Agent-based Simulation (ABS) credibility. The topic is important because credible ABS leads to an impact on society, whereby ABS is implemented by users who can benefit from it. This paper presents the perspectives of three academic panelists and a practitioner on the credibility of ABS. The discussion reveals that the increasing use of ABS models to explain social phenomena or systems that exhibit emergent behavior poses a challenge for model credibility. Several points and suggestions are raised by the panelists, including evaluating ABS model credibility via its explanatory power, the multi-dimensionality of credibility, and the role of software engineering approaches.
Schelling’s social segregation model has been extensively studied over the years. A major implication of the model is that individual preferences of similarity lead to a collective segregation behavior. Schelling used Agent-Based Modeling (ABM) with uni-dimensional agents. In reality, people are multidimensional. This raises the question of whether multi-dimensionality can boost stability or reduce segregation in society. In this paper, we first adopt ABM to reconstruct Schelling’s original model and discuss its convergence behaviors under different threshold levels. Then, we extend Schelling’s model with multidimensional agents and investigate convergence behaviors of the model. Results suggest that if agents have high levels of demand for identical neighbors, the society might become less stable or even chaotic. Also, several experiments suggest that multidimensional agents are able to form a stable society that is not segregated, if agents prefer to stay adjacent to not only "identical" but also "similar" neighbors.
We present a modelling approach to investigate the impact of the evolution of violent events on the forced displacement of people in affected countries. This work is performed in the context of the EU-funded HiDALGO Centre of Excellence, where we seek to establish more scalable and accurate models for migration prediction. Such a hybrid simulation approach is necessary because we need to understand how conflicts may evolve if we are to forecast the escape of people from future conflicts. To accomplish this, we couple a new model for conflict propagation with existing forced migration models. We explore the implications of our setup by studying the effect of different conflict progressions on the forced displacement of people, using an established model of the Mali 2012 conflict. We conclude that accurately predicting conflict evolution is a key determinant in the outcomes of forced migration simulations, particularly if such forecasts are made over longer periods.
Service design deals with methods to organize people, organizations, infrastructure, communication, and resources such that macro outcome parameters of the service are achieved while also ensuring an excellent individual customer experience. Services are complex, dynamic processes engaging the service deliverer and the customer over several interactions across multiple touchpoints. Thus, designing a service well requires an understanding of the impact of the dynamics of service delivery on service outcome parameters and customer experiences. We discuss a fine-grained agent-based simulation approach to service design that allows services to be simulated in silico. Fine-grained agent models allow us to understand the macro effect of a service design and the persona-level user experiences over multiple customer touchpoints. To model the user experience we use a need-based behavior model, influenced by advances on Maslow's need-based hierarchy. We demonstrate these ideas on an example from the air travel domain.
We consider the problem of estimating the portion of output variance in simulation analysis that is contributed by the statistical errors in fitting the input models, the latter often known as input uncertainty. This variance contribution can be written in terms of the sensitivity estimate of the output and the variance of the input distributions or parameters, via the delta method. We study the direct use of this representation in obtaining efficient estimators for the input-contributed variance, using finite differences and random perturbation to approximate the gradient, focusing especially on the nonparametric case. In particular, we analyze a particular type of random perturbation motivated by resampling that connects to an infinitesimal jackknife estimator used in bagging. We illustrate the optimal simulation allocation and the simulation effort complexity of this scheme, and show some supporting numerical results.
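In its simplest parametric form, the delta-method representation referred to here can be sketched as

\[
\sigma_{I}^{2} \;\approx\; \nabla_{\theta}\,\psi(\theta)^{\top}\, \Sigma_{\theta}\, \nabla_{\theta}\,\psi(\theta),
\]

where \(\psi(\theta)\) denotes the expected simulation output under input parameters \(\theta\), \(\Sigma_{\theta}\) is the (co)variance of the fitted input parameters, and \(\nabla_{\theta}\psi\) is the output sensitivity approximated by finite differences or random perturbation. The notation is illustrative; the paper's focus is on the nonparametric analogue of this representation.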
We establish the asymptotic validity of a class of sequential stopping rules when applying standardized time series (STS) to construct fixed-width confidence intervals (CI). The STS CI construction avoids requiring a consistent variance estimator, which is attractive to a class of steady-state simulation problems in which variance estimation is difficult. We quantify the asymptotic distribution of STS at stopping times as the prescribed half-width of the CI approaches zero. This provides us with the appropriate scaling parameter for the CI in the sequential stopping setting.
We investigate some theoretical properties of kernelized control functionals (CFs), a recent technique for variance reduction, regarding its stability when applied to subsets of input distributions or biased generating distributions. This technique can be viewed as a highly efficient control variate obtained by carefully choosing a function of the input variates, where the function lies in a reproducing kernel Hilbert space with known mean, thus ensuring unbiasedness. In large-scale simulation analysis, one often faces many input distributions, of which some are amenable to CFs and some may not be, due to technical difficulties. We show that CFs retain good theoretical properties and lead to variance reduction in these situations. We also show that, even if the input variates are generated with bias, CFs can correct for the bias, but at a price in estimation efficiency. We compare these properties with importance sampling, in particular a version using a similar kernelized approach.
This paper presents a spline-based input modelling method for inferring the intensity function of a nonhomogeneous Poisson process (NHPP) given arrival-time observations. A simple method for generating arrivals from the resulting intensity function is also presented. Splines are a natural choice for modelling intensity functions as they are smooth by construction, and highly flexible. Although flexibility is an advantage in terms of reducing the bias with respect to the true intensity function, it can lead to overfitting. Our method is therefore based on maximising the penalised NHPP log-likelihood, where the penalty is a measure of rapid changes in the spline-based representation. An empirical comparison of the spline-based method against two recently developed input modelling techniques is presented, along with an illustration of the method given arrivals from a real-world accident and emergency (A&E) department.
A Poisson point process is characterized by its rate function. One family of rate-function approximations is the authors' MNO–PQRS, which is based on a piecewise-quadratic function for each equal-width time interval. Fitting MNO–PQRS is based on the number of observed arrivals in each such interval. Therefore, the first step in fitting is to choose the number of intervals. Previously the authors discussed choosing the number of intervals for piecewise-constant rate functions. Here we extend those ideas to choosing the number of intervals for MNO–PQRS. Typically, the number of MNO–PQRS intervals is smaller than the piecewise-constant number. The results can be applied to non-Poisson arrival times; we do not investigate sensitivity to the Poisson assumptions.
In this paper, we generalize the variational Bayesian inference-based Gaussian process (VBGP) modeling approach for handling large-scale heteroscedastic datasets. VBGP is suitable for simultaneously estimating the underlying mean and variance functions with a single simulation output available at each design point. To improve the scalability of VBGP, we consider building distributed VBGP (DVBGP) models and their hierarchical versions by partitioning a dataset and aggregating individual subset VBGP predictions based on the idea of “transductive combination of GP experts.” Numerical evaluations are performed to demonstrate the performance of the DVBGP models from which some insights are derived.
We consider stochastic gradient estimation when only noisy measurements of the function are available. A standard approach is to use the finite-difference method or its variants. While natural, it is, to our knowledge, an open question whether its statistical accuracy is the best possible. This paper argues that this is nearly so in a well-defined minimax sense. In particular, we show that the central finite difference is an optimal zeroth-order derivative estimator among the class of linear estimators, over the worst-case scenario among a wide class of twice-differentiable functions. This optimality is achieved exactly for any finite sample in the single-dimensional case, and nearly (up to a multiplicative factor) in the multi-dimensional case. We also show that the same estimator is nearly optimal among the much larger class of all (nonlinear) estimators. We utilize elementary techniques for the linear minimax results and Le Cam's method for the general minimax counterparts.
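A minimal sketch of the central finite-difference estimator discussed here, applied to noisy zeroth-order measurements of a function (the test function, step size, and replication budget are illustrative assumptions):

```python
import numpy as np

def central_fd_gradient(f_noisy, x, h=1e-2, reps=10, rng=None):
    """Central finite-difference gradient estimate from noisy function measurements.

    f_noisy(x, rng) is assumed to return an unbiased but noisy measurement of f(x).
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        plus = np.mean([f_noisy(x + e, rng) for _ in range(reps)])
        minus = np.mean([f_noisy(x - e, rng) for _ in range(reps)])
        grad[i] = (plus - minus) / (2.0 * h)  # central difference along coordinate i
    return grad

# illustrative usage: f(x) = ||x||^2 observed with additive noise; the true gradient is 2x
f_noisy = lambda x, rng: float(x @ x) + rng.normal(scale=0.1)
print(central_fd_gradient(f_noisy, [1.0, -0.5]))
```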
In recent years, Monte Carlo estimators have been proposed that can estimate the ratio of two expectations without bias. We investigate the theoretical properties of a Taylor-expansion based estimator of the reciprocal mean of a non-negative random variable. We establish explicit expressions for the computational efficiency of this estimator and obtain optimal choices for its parameters. We also derive corresponding practical confidence intervals and show that they are asymptotically equivalent to the maximum likelihood (biased) ratio estimator as the simulation budget increases.
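The Taylor (geometric-series) expansion that underlies estimators of this type can be written, for \(\mu = \mathbb{E}[X] > 0\) and a constant \(c\) with \(0 < \mu < 2c\), as

\[
\frac{1}{\mu} = \frac{1}{c} \sum_{k=0}^{\infty} \left(1 - \frac{\mu}{c}\right)^{k},
\]

since the series is geometric with ratio \(1 - \mu/c\). Estimators of this type are typically obtained by truncating the series and plugging independent sample-mean estimates into the factors \(1 - \mu/c\); the notation and construction here are illustrative, and the choice of truncation level and per-term sample sizes is exactly the efficiency trade-off the paper analyzes.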
We consider using simulation to estimate the mean hitting time to a set of states in a regenerative process. A classical simulation estimator is based on a ratio representation of the mean hitting time, using crude simulation to estimate the numerator and importance sampling to handle the denominator, which corresponds to a rare event. But the estimator of the numerator can be inefficient when paths to the set are very long. We thus introduce a new estimator that expresses the numerator as a sum of two terms to be estimated separately. We provide theoretical analysis of a simple example showing that the new estimator can have much better behavior than the classical estimator. Numerical results further illustrate this.
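For a regenerative process started at a regeneration point, with regeneration time \(\tau\) and hitting time \(T_A\) to the target set \(A\), the classical ratio representation referred to here is

\[
\mathbb{E}[T_A] = \frac{\mathbb{E}\left[\min(T_A, \tau)\right]}{\mathbb{P}(T_A < \tau)},
\]

with the numerator estimated by crude simulation over single regenerative cycles and the denominator, a rare-event probability when \(A\) is seldom reached within a cycle, handled by importance sampling. The notation is illustrative; the paper's new estimator expresses the numerator as a sum of two terms that are estimated separately.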
We compare two approaches for quantile estimation via randomized quasi-Monte Carlo (RQMC) in an asymptotic setting where the number of randomizations for RQMC grows large but the size of the low-discrepancy point set remains fixed. In the first method, for each randomization, we compute an estimator of the cumulative distribution function (CDF), which is inverted to obtain a quantile estimator, and the overall quantile estimator is the sample average of the quantile estimators across randomizations. The second approach instead computes a single quantile estimator by inverting one CDF estimator across all randomizations. Because quantile estimators are generally biased, the first method leads to an estimator that does not converge to the true quantile as the number of randomizations goes to infinity. In contrast, the second estimator does, and we establish a central limit theorem for it. Numerical results further illustrate these points.
Array-RQMC has been proposed as a way to effectively apply randomized quasi-Monte Carlo (RQMC) when simulating a Markov chain over a large number of steps to estimate an expected cost or reward. The method can be very effective when the state of the chain has low dimension. For pricing an Asian option under an ordinary geometric Brownian motion model, for example, Array-RQMC reduces the variance by factors in the millions. In this paper, we show how to apply this method and we study its effectiveness in case the underlying process has stochastic volatility. We show that Array-RQMC can also work very well for these models, even if it requires RQMC points in larger dimension. We examine in particular the variance-gamma, Heston, and Ornstein-Uhlenbeck stochastic volatility models, and we provide numerical results.
Hedging-based control policies release a job into the system so that the probability of the job completing by its deadline is acceptable; job release decisions are based on quantile estimates of the job lead times. In multistage systems, these quantiles cannot be calculated analytically. In such cases, simulation can provide useful estimates, but computing a simulation-based quantile at the time of a job release decision is impractical. We explore a metamodeling approach based on efficient experiment design that allows, after an offline learning phase, a metamodel estimate of the state-dependent lead-time quantile. This allows for real-time control if the metamodel is accurate and computationally fast. In preliminary testing on a three-stage production system, we find high accuracy for quadratic and cubic regression metamodels. These preliminary findings suggest that there is potential for metamodel-based hedging policies for real-time control of manufacturing systems.
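A minimal sketch of the offline-learned regression metamodel idea for a state-dependent lead-time quantile (the state features, the placeholder simulator, and the design grid below are illustrative assumptions, not the paper's three-stage system):

```python
import numpy as np

rng = np.random.default_rng(1)

# offline phase: run a (placeholder) simulator at designed system states and
# record the 0.95 lead-time quantile observed at each design point
def simulate_lead_time_quantile(state, reps=500):
    wip, util = state  # illustrative state: WIP level, utilization
    lead_times = rng.gamma(shape=2 + wip, scale=1 / (1.05 - util), size=reps)
    return np.quantile(lead_times, 0.95)

design = np.array([[w, u] for w in range(1, 6) for u in np.linspace(0.5, 0.9, 5)])
q = np.array([simulate_lead_time_quantile(s) for s in design])

# quadratic regression metamodel: features [1, w, u, w^2, u^2, w*u]
def features(s):
    w, u = s[..., 0], s[..., 1]
    return np.stack([np.ones_like(w), w, u, w**2, u**2, w * u], axis=-1)

beta, *_ = np.linalg.lstsq(features(design), q, rcond=None)

# online phase: a fast metamodel prediction usable inside a real-time release decision
print(features(np.array([[3.0, 0.8]])) @ beta)
```

Once the coefficients are fitted offline, evaluating the metamodel at the current system state is a single inner product, which is what makes it usable for real-time control.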
Track Coordinator - Aviation Modeling and Analysis: Miguel Mujica Mota (Amsterdam University of Applied Sciences), Michael Schultz (Institute of Logistics and Aviation, Dresden University of Technology)
Aviation Modeling and Analysis
Ground and Airspace Operations
Chair: Sameer Alam (Nanyang Technological University)
Bus Fleet Size Dimensioning in an International Airport Using Discrete Event Simulation
Eduardo Carbajal (Pontificia Universidad Católica del Perú) and François Marmier and Franck Fontanili (University of Toulouse)
This paper describes a detailed discrete-event simulation model, integrated with a transactional database of flights, to model the operations of the bus fleet required to handle passenger and crew movements for inbound and outbound flights, between gates without passenger loading bridges (PLB) and aircraft at remote parking positions, in an international airport in Latin America. The objective of the model is to represent bus operations as closely as possible, including operational policies, capacities, internal road movements, restrictions, and corrective maintenance, so that it can be used to determine the optimum number of buses required to satisfy the service levels specified by the airport management. A simulation model was selected as the best tool for this situation because of the many stochastic variables in the airport process and the aim of building, at a later stage, different scenarios to determine the optimum fleet size.
The sUAS Adoption and Operations model incorporates the impacts of public perception of small Unmanned Aircraft System (sUAS) operations and policy implementation into a holistic model to complement sUAS forecasts. The model is intended to help policy-makers, regulators, and analysts understand key drivers and test the impact of policy, perception, and safety on adoption and operations. This paper provides an overview of the model, its relevance to Federal Aviation Administration (FAA) forecasting of sUAS adoption and operations, and a case study to demonstrate the “what if” capability of the model.
STTAR: a Simheuristics-enabled Scheme for Multi-stakeholder Coordination of Aircraft Turnaround Operations
Maurizio Tomasella, Alexandra Clare, and Yagmur Simge Gök (University of Edinburgh); Daniel Guimarans (Monash University); and Cemalettin Ozturk (United Technologies Research Centre Ireland Ltd.)
Aircraft turnaround operations involve all services to an aircraft (e.g. passenger boarding/disembarking, re-fuelling, deicing) between its arrival and immediately following departure. The aircraft, parked at its stand, witnesses a number of service providers move around it to perform their duties. These companies run substantially independent operations, working for different airlines/flights within a confined area where many resources, including physical space itself, have inescapably to be shared. Inter-dependencies among service providers abound, and knock-on effects at disrupted times are rife. Coordination from the side of the airport operator is difficult. We envisage a tactical robust scheme whereby ground handlers and the airport operator cooperate, albeit indirectly, in the development of plans for the next day that are less likely to be impacted by at least the more frequent operational disruptions. The scheme is based on a simheuristic approach which integrates ad-hoc heuristics with a hybrid simulation model (agent-based/discrete-event).
Classification of weather impacts on airport operations will allow efficient consideration of local weather events in network-wide analysis. We use machine learning approaches to correlate weather data from meteorological reports with airport performance data consisting of flight plan data, scheduled and actual movements, and delays. In particular, we use unsupervised learning to cluster performance impacts at the airport and classify the respective weather data with recurrent and convolutional neural networks. It is shown that such a classification is possible and allows delay estimates that incorporate weather and flight plan data at an airport. This paper serves to illustrate a possible classification with machine learning methods and is the basis for further investigations on this topic. Our machine learning approach allows for efficient matching of decreased airport performance with the occurrence of local weather events.
In this paper, a general approach for modelling airport operations is presented. Airport operations have been extensively studied in recent decades, ranging from airspace to airside and landside operations. Due to the nature of the system, simulation techniques have emerged as a powerful approach for dealing with the variability of these operations. However, in most studies the different elements are studied individually. The aim of this paper is to overcome this limitation by presenting a methodological approach in which airport operations, such as airspace and airside, are modeled together. The contribution of this approach is that the resolution level for the different elements is similar; therefore, the interface issues between them are minimized. The framework can be used by practitioners for simulating complex systems like airspace-airside operations or multi-airport systems. The framework is illustrated by presenting a case study that is being studied by the authors.
Airport Passenger Shopping Modeling and Simulation: Targeting Distance Impacts
Yimeng Chen and Cheng-Lung Wu (University of New South Wales); Yi-Shih Chung (National Chiao Tung University); and Pau Long Lau, Nga Yung Agnes Tang, and Ngai Ki Ma (University of New South Wales)
The ever-increasing importance of airport retail has encouraged both industry and academia to look into ways to increase airport retail revenue. Despite the growing interest in this topic, there is a lack of passenger shopping behavioral models. This paper aims to fill this gap and enhance our understanding of how shop location affects passenger decisions. The paper first investigates a possible passenger shopping behavioral model through an exploratory eye-tracking exercise. Data were collected to calibrate and validate the behavioral model through the use of an agent-based simulation model. Shop locations were shown to have a significant impact on passengers' choices and overall retail revenue. The results show that our proposed passenger shopping behavioral model can be suitably applied in our study context. Our model can assist airport retail planners in testing different scenarios to improve airport retail revenue cost-effectively.
Many data sets collected from physical processes or human engineered systems exhibit self-similar properties that are best understood from the perspective of multifractals. These signals fail to satisfy the mathematical definition of stationarity and are therefore incompatible with Gaussian-based analysis. Efficient algorithms for analyzing the multifractal properties exist, but there is a need to simulate signals that exhibit the same multifractal spectrum as an empirical data set. The following work outlines two different algorithms for simulating multifractal signals and addresses the strengths and weaknesses of each approach. We introduce a procedure for fitting the parameters of a multifractal spectrum to one extracted empirically from data and illustrate how the algorithms can be employed to simulate potential future paths of a multifractal process. We illustrate the procedure using a high-frequency sample of IBM’s stock price and demonstrate the utility of simulating multifractals in risk management.
Simulation of supply chains involves huge amounts of data, resulting in numerous entities flowing in the model. These networks are highly dynamic systems, where entities' relationships and other elements evolve with time, paving the way for real-time supply chain decision-support tools capable of using real data. In light of this, a solution comprising a Big Data Warehouse to store relevant data and a simulation model of an automotive plant is being developed. The purpose of this paper is to address the modelling approach, which allowed the simulation model to automatically adapt to the data stored in the Big Data Warehouse and thus adapt to new scenarios without manual intervention. The main characteristics of the conceived solution are demonstrated, with emphasis on the real-time capability and the ability of the model to load the state of the system from the Big Data Warehouse.
This paper reports on Natural Language Processing (NLP) as a technique to analyze phenomena towards specifying agent-based models (ABMs). The objective of the ABM NLP Analyzer is to facilitate non-simulationists to actively engage in the learning and collaborative design of ABMs. The NLP model identifies candidate agents, candidate agent attributes, and candidate rules, all of which non-simulationists can later evaluate for feasibility. IBM's Watson Natural Language Understanding (NLU) and Knowledge Studio were used to annotate, evaluate, and extract agents, agent attributes, and agent rules from unstructured descriptions of phenomena. The software, and the related agent-attribute-rule characterization, provides insight into a simple but useful means of conceptualizing and specifying baseline ABMs. Further, it emphasizes how to approach the design of ABMs without the use of NLP by focusing on the identification of agents, attributes, and rules.
Modeling & Simulation (M&S) and Machine Learning (ML) have been used separately for decades. They can also straightforwardly be employed in the same study by contrasting the results of a theory-driven M&S model with the most accurate data-driven ML model. In this paper, we propose a paradigm shift from seeing ML and M&S as two independent activities to identifying how their integration can solve challenges that emerge in a big data context. Since several works have already examined this interaction for conceptual modeling or model building (e.g., creating components with ML and embedding them in the M&S model), our analysis is devoted to three relatively under-studied stages: calibrating a simulation model using ML, dealing with the issues of large search spaces by employing ML for experimentation, and identifying the right visualizations of model output by applying ML to characteristics of the output or actions of the users.
Generative adversarial networks (GANs) have been successfully applied in many domains; they provide a new approach for solving computer vision, object detection, and classification problems by learning, mimicking, and generating any distribution of data. One of the difficulties in deep learning-based malware detection and classification tasks is the lack of training malware samples. With insufficient training data, the classification performance of a deep model can be compromised significantly. To solve this issue, in this paper we propose a method that uses the Deep Convolutional Generative Adversarial Network (DCGAN) to generate synthetic malware samples. Our experimental results show that, by using the DCGAN-generated synthetic malware samples, the classification accuracy of the classifier, an 18-layer deep residual network, is improved by approximately 6%.
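As a point of reference, a minimal DCGAN-style generator of the kind used to produce synthetic samples might look as follows in PyTorch (the architecture, image size, and single-channel "malware image" representation are illustrative assumptions, not the paper's exact network; training is omitted):

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Minimal DCGAN-style generator producing 64x64 single-channel samples.

    Layer sizes are illustrative, not taken from the paper.
    """
    def __init__(self, z_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),     # 1x1 -> 4x4
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),  # 4x4 -> 8x8
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),  # 8x8 -> 16x16
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),      # 16x16 -> 32x32
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, 1, 4, 2, 1, bias=False),             # 32x32 -> 64x64
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# draw a batch of synthetic samples from an (untrained) generator
z = torch.randn(16, 100, 1, 1)
fake = Generator()(z)  # shape: (16, 1, 64, 64)
print(fake.shape)
```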
Data-Driven Modeling and Simulation of Daily Activity Patterns of Elderlies Living in a Smart Home
Cyriac Azefack (Mines de Saint-Etienne, EOVI MCD); Raksmey Phan and Vincent Augusto (Mines de Saint-Etienne); Guillaume Gardin (EOVI MCD); Claude Montuy Coquard (Mutualite Française Loire - Haute Loire); Remi Bouvier (EOVI MCD, Mutualite Française Loire - Haute Loire); and Xiaolan Xie (Mines de Saint-Etienne)
Considering the globally aging population, one of the main challenges the healthcare system will have to face is helping elderly people stay at home, in good health, for as long as possible. Recent advances in technology answer this need with Smart Homes and Ambient Assisted Living programs. Data collected by the sensors are labeled and used to monitor the inhabitant's Activities of Daily Living (ADLs). In this paper, we present a new modeling framework of the smart home resident's behavior that mimics that behavior along multiple dimensions and is able to simulate the resident's daily routine. Our approach is illustrated by a real-life case study. Results show that the presented framework enables the modeling of the behavior of a person living alone in a smart home without prior knowledge of the inhabitant. Such results enable further research on frailty prediction through simulation.
City infrastructure is becoming more complex, and no single system operates in complete isolation from the others. A method of constructing digital twin cities with traffic, electrical, and telecommunication layers using open- and crowd-sourced data is outlined. The digital twin can then be perturbed to study how changes and failures in any one system propagate to others.
Modeling and simulation was key in solving a prominent problem in the oil and gas industry: how to mitigate disruption from stoppages in a single pipeline and keep customers supplied with the minimal interruptions required to sustain high service levels. The client commenced various expansion projects to add storage capacity to an existing pumping station, and concern was raised about future demand exceeding capacity. The client was interested in identifying the optimal number, size, and arrangement of tankage at the new terminal in order to obtain the most cost-efficient use of capital toward operational gains for their end customers. Benefits of the new terminal were expected to include an improved ability to receive mainline inputs at the terminal despite variability in sending batches to end customers and, conversely, in the event of upstream upsets, consistency and predictability of batch deliveries to end customers.
Even though operational management of emergency evacuation in the event of a disaster has become a major challenge for risk managers in healthcare facilities, it is still difficult for hospitals to develop effective evacuation plans that account for patients' mobility. In this study, a simulation comparison of simultaneous and sequential staircase evacuation methods was conducted for the hospital studied, reflecting the movement characteristics of patients who need transportation equipment and caregivers. The results show that simultaneous evacuation is more appropriate in this hospital, where many patients requiring helpers stay on the upper floors.
The National Institutes of Health (NIH), a component of the U.S. Department of Health and Human Services, launched Optimize NIH to improve organizational effectiveness and performance in support of the NIH mission. Optimize NIH focused on achieving efficiencies in the business processes of the Committee Management, Ethics, and Freedom of Information Act (FOIA) functions through enterprise-wide improvements. A computer modeling and simulation approach was used to develop a greater understanding of these business processes and inform recommendations for improvement. The project team developed process maps, gathered data, and built a computer model and simulation that was used to better understand the resources needed to process requests and how to best deploy those resources organization-wide. The modeling approach developed for this activity, and the lessons learned, can be used to improve the delivery of services in a wide variety of programs throughout the federal government.
Simulation-Based Improvement of the Discharge System in Highly Utilized Hospitals
John J. Case (United States Military Academy, Department of Systems Engineering) and Kimberly P. Ellis (Virginia Tech, Grado Department of Industrial and Systems Engineering)
In collaboration with the leadership team at a major metropolitan hospital in the United States, the research team developed alternatives to improve the hospital's discharge system in order to increase the availability of bed space for new patients. Following a multi-day site visit, we used service time data provided by the hospital to develop a discrete-event simulation (DES) model to evaluate different discharge improvement strategies, assuming fixed capacity. We found that a Percent-by-Time discharge strategy improves the hospital's patient holding time by at least 24% if implemented in all service areas. This zero-cost strategy requires no increase in hospital resources and can improve patient flow and timely access to healthcare in highly utilized hospitals.
Patient flow represents one of the largest opportunities for improvement in healthcare systems. An effective approach to improving patient flow is identifying and eliminating artificial variability. The most significant source of artificial variability is dysfunctional scheduled admissions, which can be reduced or eliminated by load-smoothing. In this study, we develop a discrete-event simulation model using 12 months of data from the hospital database and obstetric unit logbooks to examine how the impact of load-smoothing scheduled admissions on the unit's patient flow performance metrics depends on the volume of unscheduled admissions. The results show that load-smoothing leads to lower patient waiting times but does not considerably affect the bed occupancy rate. Moreover, reducing the volume of unscheduled admissions increases the waiting-time reduction achieved by load-smoothing, while increasing the volume of unscheduled admissions decreases it.
Cerebrovascular accidents (CVA) are the main cause of death in Chile. Although an effective procedure exists (thrombolysis), the rate of CVA patients who receive this treatment fluctuates between 3% and 5%. Thrombolysis allows a time window of 4.5 hours within which treatment is effective in avoiding the serious damage provoked by a CVA. This time window is the main reason patients are excluded from drug administration and results in a low thrombolysis rate. We therefore set out to identify the factors that prevent patients from receiving proper treatment, dividing the process into pre-admission and post-admission phases.
Following a survey of patients affected by CVA, we preliminarily identified pre-admission factors at El Pino Hospital (EPH), while for post-admission patients we followed an in-hospital simulation modeling process at EPH. The identification of these factors should allow us to mitigate them and increase the thrombolysis rate for patients affected by a CVA.
Simulation models have been used in logistics to evaluate the transportation of goods throughout a region, zone, or country. When it comes to transporting patients between facilities in the healthcare industry, most simulation models center on activities at the origin or destination facility, with the goal of refining hospital operations to prepare a patient for transfer or for the intake of arriving patients. Few models focus on the network of ambulances needed to complete these inter-facility transfers. This case study presents a method for evaluating dispatch locations within a geographically wide-spread network servicing a mix of rural and metropolitan areas. Furthermore, the simulation model takes into consideration the location of the facilities, the routes of travel between locations, and how external factors, such as weather conditions, might impact efficiency.
The steel stock yard for storing purchased steel plates is the first step of shipbuilding and a space where sorting is performed to supply the proper steel plates to the cutting process at the right time. Usually, it is difficult to supply all steel plates from one steelworks. Therefore, the variation in plate procurement duration increases when steel plates are supplied from multiple steelworks, and the resulting changes in production plans affect how long the steel plates stay in the stock yard. To address this problem, shipyards are researching efficient management of steel plates in a limited space. In this study, a steel stock yard simulation model was constructed using discrete-event simulation. Through this model, the steel plate delivery plan can be established in more detail, and a method of managing steel plates in the steel stock yard is proposed.
Simulation Model Risk-Based Scheduling Tool for High-Mix and Low-Volume Manufacturing Facility with Sequence-Dependent Setup Times
Jose E. Linares Blasini and Sonia M. Bartolomei Suarez (University of Puerto Rico Mayaguez) and Wandaliz Torres-Garcia (University of Puerto Rico at Mayaguez)
The use of deterministic scheduling for a high-mix, low-volume manufacturing facility is inefficient and obsolete due to the inherent variability and occurrence of uncertain events on the manufacturing floor. This project develops a robust scheduling tool for a high-mix, low-volume manufacturing facility with sequence-dependent setup times that takes into consideration some of this inherent variability. The scheduling tool is created using the Simio software package to enable the creation of schedules that adjust to schedulers' needs while incorporating manufacturing constraints. The tool was created to solve an existing problem for a local industry partner and was investigated further through a case study. Ultimately, this work developed a simulation tool that models the inherent variability of a high-mix, low-volume manufacturing facility with sequence-dependent setup times and feeds the generation of risk-based schedules, making it possible to forecast tardiness before it occurs and to update schedules dynamically as needed.
Recently, many production lines have become smarter and more automated following Germany's Industry 4.0 industrial policy. However, humans still carry out a large portion of the work on production lines that lack standardized tasks or require difficult assembly skills. To improve the productivity of processes in which workers play an essential role, ergonomic studies use motion capture equipment to measure and evaluate worker posture. However, most of this research remains in the process design stage. In this study, a cyber-physical production system for workers is proposed for both process design and operation. To implement this method, the study designs an architecture and conducts a pilot implementation at assembly lines in South Korea for verification and validation.
The Home Depot is using a configurable conveyor model to improve operations in their distribution centers. As their business grows, some of The Home Depot material handling automation is being pushed to the limits of its initial design capacity. This case study describes a case conveyor merge-sort model built by Roar Simulation for The Home Depot. The Home Depot uses simulation to develop plans and evaluate operational strategies at their Regional Distribution Centers (RDC). The model can be used to analyze the operations of eighteen (18) RDCs across North America. Analytics Professionals at The Home Depot use this model to perform their own analysis through a customized user interface and AutoMod’s analysis tool AutoStat. The model has helped The Home Depot make operational and equipment update decisions prior to seasonal peak demands on their facilities.
Collaborative robots (or cobots) are attracting attention in manufacturing for high-mix, low-volume production. When used in assembly operations, the process needs to be balanced to avoid idle times and to adjust bottlenecks. In this article, the balancing of a human-robot collaborative assembly cell is studied using an industrial case study. The balancing is carried out to evaluate which tasks should be automated using the cobot. It differs from conventional balancing problems in that the robot's speed varies with its distance from the operator, robots have unique skills and are better at some tasks than others, and the flexibility of cobots requires that balancing be done more frequently. First, continuous simulation is used to model the human and the robot and to estimate the cycle times. Second, an event-based simulation is used to introduce variables such as varying robot speeds and variability due to human factors.
To cope with increasingly short product life cycles and rapidly changing markets and customer needs, many manufacturing companies are trying to develop and change their own production lines. To address these situations, the Reconfigurable Manufacturing System (RMS) emerged, which can adapt rapidly to changes in the relevant hardware, software, and system structure while maintaining productivity. Reconfigurability evaluation is an important concept for determining the optimal alternative for both productivity and flexibility prior to changes in production or assembly lines in an RMS. This paper proposes a methodology to evaluate reconfigurability using modeling and simulation (M&S), illustrated with an automotive assembly line case.
A worldwide leading supplier of tubes for the energy industry and other industrial applications faced a huge business challenge during 2012: the construction of a new facility in Bay City, Texas, which would operate with an automatic system for WIP handling in its warehouse, a revolutionary management system for the company and the industry. A multi-paradigm simulation was developed to analyze different scenarios, define the best investment plan, and minimize risks. Numerous heuristics were also used to generate the requirements for the future WMS. During 2018, once the facility started operations, the simulation model was adjusted to the real operating conditions, allowing the study of a strategy to close the gap between the current and the optimal warehouse management. During 2019, a new challenge came up: using the simulation for daily operational optimization, giving visibility to potential limitations in warehouse handling capacity.
As automated storage and retrieval systems (AS/RSs) come into ever more frequent use in dynamic logistics environments around the world, using various operational approaches to evaluate AS/RS performance becomes a more complex challenge for logistics managers. In this paper, simulation models are developed that consider both operational priorities and storage location allocation. The aims are to visualize the dynamic operation processes easily and to provide rapid performance evaluation of shuttle-vehicle-type mini-load AS/RSs in a dynamic logistics environment. The simulation results show that performance indicators such as inventory turnover and the average total flow time for storage and retrieval operations under different operational-priority rules can inform methods for enhancing customer satisfaction by shortening the lead time from merchandise ordering to customer delivery.
History teaches us that commercial supply chains (SCs) emerged and developed from military SCs. However, the interest of researchers and practitioners has focused mainly on the former, creating a significant gap in the literature. Although several factors can be cited for this prevalent inattention and consequent lack of detailed studies of military SCs, the main reason for the gap is limited access to reliable sources of information on how this type of SC operates in practice. Additionally, from a logistics point of view, battle and operational situations are very attractive for risk analysis, since no commercial SC faces the level of challenges and dangers that a conventional military SC tackles on a daily basis.
Maintaining military readiness is more challenging than ever: missions take place in a theatre with many actors (military and civilian), while fiscal reality and available resources limit live training opportunities with coalition partners. Simulation has been recognized by NATO as a critical technology to support training, and the concept of "Mission Training through Distributed Simulation" (MTDS) is currently being developed by several nations under the umbrella of the NATO Modelling and Simulation Group (NMSG). However, providing MTDS solutions is also a technical and organizational challenge in itself. In addition to agreed and validated interoperability standards, NMSG is also developing a services-based approach to delivering simulation. The M&S-as-a-Service (MSaaS) concept has been investigated and tested in recent experiments and exercises. This paper introduces the MSaaS approach for the rapid implementation of exercise environments. The VIKING-18 exercise is presented as a use case for MSaaS, and our lessons learned are discussed.
Roketsan Inc. is a leading institution in Turkey for designing, developing, and manufacturing rockets, missiles, and weapon systems. The production system contains two different facilities with 50 workshops and approximately 1,200 resources. This case study presents a simulation of the propellant casting workshops where rocket motors are produced. Beyond standard simulation functions, our simulation model considers resources used in parallel, limited resources, capacity constraints, and the shift system. The model analyses the feasibility of rocket motor production against the annual production calendar and, from these analyses, reveals bottleneck resources. With these capabilities it is used as a decision-support tool for annual production planning, shift scheduling, resource allocation, and investment decisions in the factory.
Demand-driven material requirements planning (DDMRP) is considered an effective method to help enterprises plan and manage inventories and materials, addressing the challenges of increasing variability and complexity in the current market environment. Here, a simulation model of a bicycle assembly company was constructed to verify and evaluate the benefits of DDMRP through experiments that take variability into account.
Inventory is a critical component of every supply chain. Typical supply chain inventory management strategies can be applied to biofuel inventory; however, depending on the feedstock, one might need to follow a completely different paradigm. For example, when using single-stem poplar as feedstock in a biofuel supply chain, the typical inventory policies do not apply. If poplar inventory is stored in the field, the inventory continues to grow, thus increasing its value. When disease and drought conditions are present, however, they can affect a portion of all the fields in the system regardless of their age. While some poplar genomes are more resistant to disease and drought, they may lack the ideal growth curve. In this case study, we look at the trade-offs among growth curves, drought and disease tolerance, and storing inventory in the field.
Warehouse simulation and optimization have been used for many years to improve the internal operations, from receiving to shipping, in distribution centers. Most of these simulation environments are built on distributions and potential scenarios, rather than actual warehouse flow. This case study presents an improved method to optimize the warehouse using an actual outbound dataset that takes into consideration seasonality, SKU slotting, rack types, and pick path. Moreover, the generated simulation model is used for slotting analysis and optimization.
This case study describes one instance of a company achieving major cost avoidance by updating and upgrading its planning for responding to a catastrophic event, such as a natural disaster. Using large-scale data collection, analysis, scenario development, and simulation, the company was able, in cooperation with its business-loss insurance carrier, to optimize its planning and indemnification for catastrophic business disruption. As a result, the company substantially reduced its exposure to financial loss, with the secondary benefit of substantially reducing the cost of its business-loss insurance premiums. This is an example of big data, combined with simulations using the large-scale data sets, being used to examine multiple scenarios, select a likely "average" one, and plan accordingly.
Discrete-event simulation is an established and popular technology for investigating the dynamic behavior of complex manufacturing and logistics systems. In addition to conventional simulation studies that focus on single model aspects to answer project-specific analysis questions, new developments in computational power, data processing, and the availability of big data infrastructures enable broad-scale simulation experimentation. In this work, we applied our methodology of knowledge discovery in simulation data to an industrial case study of a production process at AUDI AG. The main focus of interest in this case study was the routing of intralogistics material supply trains. We used broad-scale experiment designs in combination with data mining methods to investigate a very large number of possible routes. The goal was to find patterns and combinations of routes and associated factors, and to investigate how they affect system performance.
Since 1990, Rijkswaterstaat (part of the Dutch Ministry of Infrastructure and Water Management) has used SIVAK, a simulation system for analyzing the handling of ship traffic at locks, narrow waterways, and other waterway infrastructure. Among other things, it can be used to investigate risks in extreme situations on the waterway. Recently, SIVAK was redeveloped by Systems Navigator, Witteveen+Bos, and MARIN in state-of-the-art object-oriented simulation software (Simio). The Simio model works seamlessly with Systems Navigator's Scenario Navigator SaaS platform, allowing Rijkswaterstaat to run SIVAK as a web-based solution available to users from any location, including simulation model execution, animation, scenario management, KPI reporting, debugging, documentation, and scenario comparison.
Track Coordinator - Complex, Intelligent, Adaptive and Autonomous: Saurabh Mittal (The MITRE Corporation), Claudia Szabo (The University of Adelaide)
Complex, Intelligent, Adaptive and Autonomous Systems
Behavioral Modeling and Experimentation for Complex Systems
Chair: Saurabh Mittal (The MITRE Corporation)
Learning to Forget: Design of Experiments for Line-based Bayesian Optimization in Dynamic Environments
Jens Jocque, Tom Van Steenkiste, Pieter Stroobant, Rémi Delanghe, Dirk Deschrijver, and Tom Dhaene (Ghent University – imec)
Various scientific and engineering fields rely on measurements in 2D spaces to generate a map or locate the global optimum. Traditional design of experiments methods determine the measurement locations upfront, while a sequential approach iteratively extends the design. Typically, the cost of traveling between sample locations can be ignored, for example in simulation experiments. In those cases, the experimental design is generated using a point-based method. However, if traveling towards the next sample location incurs an additional cost, line-based sampling methods are favored. In this setting, the sampling algorithm needs to generate a route of measurement locations. A common engineering problem is locating the global optimum. In certain cases, such as fire hotspot monitoring, the location of the optimum dynamically changes. In this work, an algorithm is proposed for sequentially locating dynamic optima in a line-based setting. The algorithm is evaluated on two dynamic optimization benchmark problems.
This paper investigates the impact of behavioral modeling assumptions on a Complex Adaptive System (CAS) model. We hypothesize that behavioral models can be overconfident in their predictions due to the challenges of modeling behavior. Supporting this hypothesis, the paper discusses the challenges of modeling behavior, presents a CAS example problem of designing an online dating app, models the dating app as a CAS, and investigates the impacts of different behavioral models on the design. We show how similar behavioral models can nonetheless have a significant impact on the simulation results, highlighting the challenge our community faces in moving forward with valid behavioral models. Finally, we call on the community at large to address these challenges by collaboratively researching and comparing behavioral models so as to guide future modelers.
We introduce a deep Q-network (DQN) based model that addresses the dispatching and routing problems for autonomous mobile robots. The DQN model is trained to dispatch a small fleet of robots to perform material handling tasks in a virtual as well as an actual warehouse environment. Specifically, the DQN model is trained to dispatch an available robot to the closest task that avoids or minimizes encounters with other robots. In a discrete-event simulation experiment, the DQN model outperforms the shortest-travel-distance rule in terms of avoiding traffic conflicts, improving the makespan for completing a set of tasks, and reducing the mean time in system for tasks.
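A minimal sketch of how such a learned dispatching rule might be applied at decision time is shown below (PyTorch). The feature set, the network size, and the task_features helper are illustrative assumptions rather than the representation used by the authors, and the training loop (experience replay, target network) is omitted.

    import random
    import torch
    import torch.nn as nn

    q_net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))   # value of a (robot, task) pair

    def task_features(robot, task):
        # hypothetical features: travel distance, task waiting time, local congestion
        return [task["distance"][robot], task["wait_time"], task["congestion"]]

    def dispatch(robot, open_tasks, epsilon=0.1):
        """Epsilon-greedy dispatch: pick the open task with the highest predicted value."""
        if random.random() < epsilon:
            return random.choice(open_tasks)                 # occasional exploration
        feats = torch.tensor([task_features(robot, t) for t in open_tasks], dtype=torch.float32)
        with torch.no_grad():
            scores = q_net(feats).squeeze(-1)                # one Q-value per candidate task
        return open_tasks[int(scores.argmax())]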
A robust, consistent, and high-capacity data network is critical in modern society to ensure that governments, businesses, and individuals are able to communicate and coordinate disaster relief, just-in-time deliveries, or simply how and where to meet friends. Existing terrestrial communication networks are susceptible to being disabled or destroyed by disasters, and the use of satellites is costly and inflexible. We propose a communication protocol, called Communication Collective, to operate in a resource-constrained, dynamic, and contested environment with distributed, learning, adaptive, predictive, mobile agents. Each agent gathers information from the environment and the surrounding Communication Collective and independently weighs its options for how to behave in order to improve the performance of the collective as a whole. Our experimental analysis shows the potential of our approach, with more delivered messages and significantly reduced latency.
Model-based and simulation-supported engineering based on the formalism of synchronous block diagrams is among the best practices in software development for embedded and real-time systems. As the complexity of such models and the associated computational demands for their simulation steadily increase, efficient execution strategies are needed. Although there is an inherent concurrency in most models, tools are not always capable of taking advantage of multi-core architectures of simulation host computers to simulate blocks in parallel. In this paper, we outline the conceptual obstacles in general and discuss them specifically for the widely used simulation environment Simulink. We present an execution mechanism that harnesses multi-core hosts for accelerating individual simulation runs through parallelization. The approach is based on a model transformation. It does not require any changes in the simulation engine, but introduces minimal data propagation delays in the simulated signal chains. We demonstrate its applicability in an automotive case study.
Natural disasters are notable for the high costs associated with responding to and recovering from them. In this paper we address the issue of critical resource allocation during a natural disaster, incorporating the level of importance of the affected region and a cost parameter. Our risk-reducing model can be applied to online stochastic environments in the domain of natural disasters. The framework achieves more efficient resource allocation in response to dynamic events and is applicable to problems where the disaster evolves alongside the response efforts.
Track Coordinator - Cybersecurity: Jason Jaskolka (Carleton University), Dong Jin (Illinois Institute of Technology), Danda Rawat (Howard University), Sachin Shetty (Old Dominion University)
Cybersecurity
Panel: Simulation for Cyber Risk Management: Where Are We, and Where Do We Want to Go?
Chair: Sachin Shetty (Old Dominion University)
Simulation for Cyber Risk Management – Where are we, and where do we want to go?
Sachin Shetty (Old Dominion University), Indrajit Ray (Colorado State University), Nurcin Celik and Michael Mesham (University of Miami), Nathaniel Bastian (United States Military Academy), and Quanyan Zhu (New York University)
There is a dearth of simulation environments that conduct comprehensive cyber risk assessments and provide insights that are accessible to decision makers and operational staff. During the Winter Simulation Conference 2019, a group of experts will discuss the challenges and opportunities in developing simulation platforms for cyber risk management. The panel will focus on issues with integrating technologies into simulation platforms, loss of fidelity due to lack of access to cyber datasets, and the complexity involved in representing cyber-physical systems. This paper is a collection of position papers from the participating experts supporting the viewpoints that will be presented in the panel discussion.
Chair: Alex Kline (U.S. Army Office of the Deputy Chief of Staff G-8)
Solving the Army’s Cyber Workforce Planning Problem using Stochastic Optimization and Discrete-Event Simulation Modeling
Nathaniel Bastian and Christopher Fisher (United States Military Academy), Andrew Hall (U.S. Military Academy), and Brian Lunday (Air Force Institute of Technology)
The U.S. Army Cyber Proponent (Office Chief of Cyber) within the Army's Cyber Center of Excellence is responsible for making many personnel decisions impacting officers in the Army Cyber Branch (ACB). Some of the key leader decisions include how many new cyber officers to hire and/or branch transfer into the ACB each year, and how many cyber officers to promote to the next higher rank (grade) each year. We refer to this decision problem as the Army's Cyber Workforce Planning Problem. We develop and employ a discrete-event simulation model to validate the number of accessions, branch transfers, and promotions (by grade and cyber specialty) prescribed by the optimal solution to a corresponding stochastic goal program that is formulated to meet the demands of the current force structure under conditions of uncertainty in officer retention. In doing so, this research provides effective decision-support to senior cyber leaders and force management technicians.
Multiple state and non-state actors have recently used social media to conduct targeted disinformation operations for political effect. Even in the wake of these attacks, researchers struggle to fully understand these operations and, more importantly, to measure their effect. The existing research is complicated by the fact that modeling and measuring a person's beliefs is difficult, and manipulating these beliefs in experimental settings is not morally permissible. Given these constraints, our team built an agent-based model that allows researchers to explore various disinformation forms of maneuver in a virtual environment. The model mirrors the Twitter social media environment and is grounded in social influence theory. Having built this model, we demonstrate its use in exploring two disinformation forms of maneuver: 1) "backing" key influencers and 2) "bridging" two communities.
Smart Public Safety (SPS) systems have become feasible by integrating heterogeneous computing devices to collaboratively provide public safety services. While the fog/edge computing paradigm promises solutions to the shortcomings of cloud computing, such as extra communication delay and network security issues, it also introduces new challenges. From a system architecture perspective, a service-oriented architecture (SOA) based on a monolithic framework has difficulty providing scalable and extensible services in a large-scale distributed Internet of Things (IoT)-based SPS system. Furthermore, traditional management and security solutions rely on a centralized authority, which can become a performance bottleneck or single point of failure. Inspired by the microservices architecture and blockchain technology, a Lightweight IoT-based Smart Public Safety (LISPS) framework is proposed on top of a permissioned blockchain network. By decoupling the monolithic complex system into independent sub-tasks, the LISPS system possesses high flexibility in the design process and in online maintenance.
This article proposes a novel discrete-event simulation model for predicting cyber maintenance costs under multiple scenarios. In this study, the evolution of the computer hosts is modeled similarly to the Susceptible-Infected-Removed (S-I-R) epidemiological model. The concept of a nested birth-and-death process is introduced in the context of a vulnerability's lifetime and its interaction with a host. The objectives of the model are to study the benefits and drawbacks of the current scanning and maintenance policies, to propose cost-effective alternatives, and to investigate the significance of celebrity vulnerabilities.
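A minimal discrete-event sketch of S-I-R-style host dynamics is given below for orientation. The exponential inter-event times, the rates, and the single patch event per host are illustrative simplifications, not the paper's nested birth-and-death model or its cost accounting.

    import heapq
    import random

    def simulate(n_hosts=100, vuln_rate=0.05, patch_rate=0.02, horizon=365.0, seed=0):
        """Hosts move S -> I when a vulnerability appears and I -> R when a scan patches it."""
        random.seed(seed)
        state = ["S"] * n_hosts
        events = [(random.expovariate(vuln_rate), "infect", h) for h in range(n_hosts)]
        heapq.heapify(events)
        n_patched = 0
        while events:
            t, kind, h = heapq.heappop(events)
            if t > horizon:
                break
            if kind == "infect" and state[h] == "S":
                state[h] = "I"                                           # vulnerability "birth"
                heapq.heappush(events, (t + random.expovariate(patch_rate), "patch", h))
            elif kind == "patch" and state[h] == "I":
                state[h] = "R"                                           # vulnerability "death"
                n_patched += 1
        return n_patched, state.count("I")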
Community-based fraud detection is one method of ensuring the trustworthiness of Internet resources. The TrustSearch platform has been developed to provide community-based fraud detection services. It allows Internet users to submit applications reporting potentially fraudulent Internet resources and relies on a consensus-seeking algorithm to approve or reject each application. The system exhibits complex and dynamic behavior, and simulation is used to evaluate its performance and to determine appropriate operational parameters. The objective is to find an appropriate trade-off between evaluation accuracy and efficiency, which is a characteristic challenge in distributed decision-making systems. An agent-oriented simulation model is developed and experimental studies are conducted. It is shown that sufficiently high evaluation accuracy can be achieved and that the results are remarkably robust; however, a relatively large number of participants is required. The community-based platform uses blockchain technologies to reward participants for their contributions.
Simulation-based Blockchain Design to Secure Biopharmaceutical Supply Chain
Wei Xie (Northeastern University), Wencen Wu (San Jose State University), Bo Wang (Northeastern University), Jie You (QuarkChain Inc.), Zehao Ye (Northeastern University), and Qi Zhou (QuarkChain Inc.)
The bio-drug sector is growing rapidly and has become one of the key drivers of personalized medicine and the advancement of the life sciences. As more cell and gene therapy products are introduced into the market and globalization further expands, the complexity of the biopharma supply chain increases dramatically. Built on the two-layer QuarkChain, we introduce a new blockchain design to facilitate the development of a reliable and efficient global biopharma supply chain. Because blockchain can improve the transparency and integrity of delivery processes and facilitate real-time monitoring and traceability, it can efficiently improve drug safety. The proposed design supports dynamic and integrated biopharma supply chain risk management even as the chain's complexity increases. We further introduce stochastic simulation to guide the blockchain design so that drug products are protected from theft, temperature diversion, and counterfeiting while supply chain reliability, efficiency, and responsiveness are improved. A preliminary empirical study demonstrates that our approach has promising performance.
Uncertainty is ubiquitous in almost every real-world optimization problem. Stochastic programming has been widely used to capture the uncertain nature of real-world optimization problems in many different ways. These models, however, often fall short in adequately capturing the stochasticity introduced by the interactions within a system or a society involving human beings or sub-systems. Agent-based modeling, on the other hand, can efficiently handle randomness resulting from the interactions among different members or elements of a system. In this study, we develop a framework for stochastic programming optimization that embeds an agent-based model to allow for uncertainties due to both the stochastic nature of system parameters and the interactions among agents. A case study is presented to show the effectiveness of the proposed framework.
Bicycle-sharing systems (BSSs) have attracted much attention due to their great success in providing a low-cost and environment-friendly alternative to traditional public transportation. In BSSs comprising stations with fixed docks, customer satisfaction can be measured by the availability of bikes for pick-ups and/or open docks for returns. However, it is quite common for the spatial balance of bike inventories to break down due to customer behavior or frequent failures of bikes and docks. As a result, constant rebalancing and maintenance services are required to sustain adequate levels of customer satisfaction. In this research, a simulation framework is developed to optimize the rebalancing and maintenance activities while satisfying customers' needs over a service area. An optimization model solved by Ant Colony Optimization is applied to Citibike in New York City as an example to validate the effectiveness and efficiency of the proposed simulation framework.
Our relationship with technology is constantly evolving, and that relationship is adapting even more quickly when faced with disaster. Understanding how to utilize human interactions with technology and the limitations of those interactions will be a crucial building block to contextualizing crisis data. The impact of scale on behavioral change analyses is an unexplored yet necessary facet of our ability to identify relative severities of crisis situations, magnitudes of localized crises, and total durations of disaster impacts. In order to analyze the impact of increasing scale on the identification of extreme behaviors, we aggregated Twitter data from Houston, Texas circa Hurricane Harvey across a wide range of scales. We found inversely related power law relationships between the identification of sharp Twitter activity bursts and sharp activity drop-offs. The relationships between these variables indicate the direct, definable dependence of social media aggregation analyses on the scale at which they are performed.
Evaluating the Performance of Maintenance Strategies: A Simulation-based Approach for Wind Turbines
Clemens Gutschi (Graz University of Technology), Moritz Graefe (Uptime Engineering GmbH), Nikolaus Furian (Graz University of Technology), Athanasios Kolios (Ocean & Marine Engineering, University of Strathclyde), and Siegfried Voessner (Graz University of Technology)
The performance of multi-component systems is heavily influenced by the individual maintenance strategy applied to each subsystem, its costs, and the impact of subsystem failures on the performance of the whole system. For an assessment of different combinations of maintenance strategies, we present a simulation-based evaluation approach applied to an offshore wind farm, investigating the produced energy and the levelized cost of electricity. The evaluation is carried out by breaking wind turbines down into major subsystems, applying different suitable maintenance strategies to them, and monitoring the performance of the entire wind farm. The investigated configurations include corrective, predictive, and reliability-centered maintenance strategies. We thereby investigate limits regarding minimum and maximum performance as well as the impact of a realistic application of monitoring systems on system performance.
As climate change approaches a point of irreversibility, it is becoming increasingly important to find ways of preventing food waste from reaching landfills and emitting greenhouse gases. Food rescue programs offer a means of simultaneously diverting surplus food from landfills and addressing food insecurity. Recently, some food rescue organizations in the U.S. have begun leveraging crowd-shipping to more efficiently transport surplus food from donors to food-insecure recipients. However, the success of such initiatives relies on achieving a critical mass of donor and crowd-shipper participation. This paper describes a conceptual agent-based model that was developed to evaluate the design parameters of a volunteer-based crowd-shipping system for food rescue. Preliminary experimental results demonstrate the importance of generating sufficient awareness and commitment among potential volunteers in the early stages of the program’s development to ensure consistent participation and service.
Avoiding “Day-zero”: A Testbed for Evaluating Integrated Food-energy-water Management in Cape Town, South Africa
Ke Ding and Jonathan M. Gilligan (Vanderbilt University) and George M. Hornberger (Vanderbilt University, Vanderbilt Institute for Energy and Environment)
Deep connections between water resources and food and energy production (the food-energy-water, or FEW, nexus) complicate the challenge of sustainably managing an uncertain water supply. We present an agent-based model as a computational testbed for studying different approaches to managing the FEW nexus and apply the model to the 2017-2018 water crisis in Cape Town, South Africa. We treat the FEW nexus as connecting municipal water use by Cape Town residents, agricultural water use by vineyards, and hydroelectric generation. We compare two scenarios for responding to drought: business-as-usual (BAU) and holistic adaptive management (HAM). BAU takes no action until the monthly supply is insufficient to meet demand, whereas HAM raises water tariffs when the reservoir storage level drops below its pre-drought monthly average. Simulation results suggest that holistic adaptive management can alleviate the impact of drought on agricultural production, hydropower generation, and the availability of water for residential consumption.
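The contrast between the two response rules can be illustrated with a minimal monthly water-balance sketch. The tariff step, demand elasticity, and inflow and storage series below are placeholder assumptions, not the calibrated values of the Cape Town model.

    def simulate(storage, inflows, base_demand, pre_drought_avg, policy="HAM",
                 tariff_step=0.10, elasticity=-0.3):
        """Monthly water balance under BAU (no early action) or HAM (tariff trigger)."""
        tariff, path = 1.0, []
        for month, inflow in enumerate(inflows):
            if policy == "HAM" and storage < pre_drought_avg[month % 12]:
                tariff *= 1.0 + tariff_step                  # HAM: raise tariff below the pre-drought average
            demand = base_demand * tariff ** elasticity      # residential demand responds to price
            withdrawal = min(demand, max(storage + inflow, 0.0))
            storage = storage + inflow - withdrawal
            path.append((month, storage, tariff, withdrawal))
        return path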
We analyse the equilibrium behaviour of a large network of banks in the presence of incomplete information, where inter-bank borrowing and lending is allowed and banks suffer shocks to assets. In a two-time-period graphical model, we show that the equilibrium wealth distribution is the unique fixed point of a complex, high-dimensional distribution-valued map. Fortunately, there is a dimension collapse in the limit as the network size increases, where the equilibrated system converges to the unique fixed point of a simple, one-dimensional distribution-valued operator, which, we show, is amenable to simulation. Specifically, we develop a Monte Carlo algorithm that computes the fixed point of a general distribution-valued map and derive sample complexity guarantees for it. We show numerically that this limiting one-dimensional regime can be used to obtain useful structural insights and approximations for networks with as few as a few hundred banks.
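The limiting one-dimensional fixed point can be approximated with a simple sample-based iteration, sketched below. The toy operator T and the quantile-based stopping rule are illustrative assumptions standing in for the paper's distribution-valued operator and its sample-complexity analysis.

    import numpy as np

    def fixed_point(T, n_samples=100_000, n_iter=200, tol=1e-3, seed=0):
        """Represent the law by a sample cloud and iterate X <- T(X) until quantiles stabilize."""
        rng = np.random.default_rng(seed)
        q = np.linspace(0.01, 0.99, 25)
        x = rng.normal(size=n_samples)                       # initial guess for the wealth law
        for _ in range(n_iter):
            x_new = T(x, rng)
            if np.max(np.abs(np.quantile(x_new, q) - np.quantile(x, q))) < tol:
                return x_new
            x = x_new
        return x

    # toy operator: post-shock wealth plus a transfer from a randomly matched counterparty
    T = lambda x, rng: 0.9 * x + 0.1 * rng.permutation(x) + rng.normal(0.0, 0.5, size=x.size)
    wealth_law = fixed_point(T)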
We develop and analyze an unbiased Monte Carlo estimator for a functional of a one-dimensional jump-diffusion process with a state-dependent drift, volatility, jump intensity and jump size. The approach combines a change of measure to sample the jumps with the parametrix method to simulate the diffusions. Under regularity conditions on the coefficient functions as well as the functional, we prove the unbiasedness and the finite variance property of the estimator. Numerical experiments illustrate the performance of the scheme.
Fractional Brownian motions (fBM) and related processes are widely used in financial modeling to capture the complicated dependence structure of the volatility. In this paper, we analyze an infinite series representation of fBM proposed in (Dzhaparidze and Van Zanten 2004) and establish an almost sure convergence rate of the series representation. The rate is also shown to be optimal. We then demonstrate how the strong convergence rate result can be applied to construct simulation algorithms with path-by-path error guarantees.
Coherent risk measures have received increasing attention in recent years among both researchers and practitioners. The problem of estimating a coherent risk measure can be cast as estimating the maximum expected loss over a set of probability measures. In this paper, we consider the case where the set of probability measures is finite and study the estimation of a coherent risk measure via an upper confidence bound (UCB) approach, in which samples of the portfolio loss are simulated sequentially from one of the probability measures. We study in depth the so-called Grand Average estimator and establish statistical guarantees, including its strong consistency, asymptotic normality, and asymptotic mean squared error. We also construct asymptotically valid confidence intervals.
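A minimal sketch of sequential UCB-style sampling over a finite set of measures is shown below. The confidence radius and the final plug-in maximum are generic textbook choices and may differ from the paper's Grand Average estimator; simulate_loss(i) is a hypothetical hook that draws one portfolio loss under measure i.

    import math

    def estimate_max_expected_loss(simulate_loss, n_measures, budget=10_000):
        counts = [1] * n_measures
        means = [simulate_loss(i) for i in range(n_measures)]     # one sample per measure to start
        for t in range(n_measures + 1, budget + 1):
            ucb = [means[i] + math.sqrt(2.0 * math.log(t) / counts[i]) for i in range(n_measures)]
            i = max(range(n_measures), key=lambda k: ucb[k])      # sample the most promising measure
            x = simulate_loss(i)
            counts[i] += 1
            means[i] += (x - means[i]) / counts[i]                # running-mean update
        return max(means)                                         # plug-in estimate of max_i E_i[loss]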
This paper proves equivalences of portfolio optimization problems with negative expectile and omega ratio. We derive subgradients for the negative expectile as a function of the portfolio from a known dual representation of expectile and general theory about subgradients of risk measures. We also give an elementary derivation of the gradient of negative expectile under some assumptions and provide an example where negative expectile is demonstrably not differentiable. We conducted a case study and solved portfolio optimization problems with negative expectile objective and constraint (code and data are posted on the web).
Tail risk estimation for portfolios of complex financial instruments is an important enterprise risk management task. Time-consuming nested simulations are usually required for such tasks: the outer loop simulates the evolution of the risk factors, or scenarios, and inner simulations are then conducted in each scenario to estimate the corresponding portfolio losses, whose distribution determines the tail risk of interest. In this paper we propose an iterative procedure, called Importance-Allocated Nested Simulation (IANS), for tail risk estimation. We tested IANS in a multiple-period nested simulation setting for an actuarial application. Our numerical results show that IANS can be an order of magnitude more accurate than a standard nested simulation procedure.
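For reference, the standard uniform-budget nested procedure that IANS improves upon can be sketched as follows; sample_scenario and inner_loss are hypothetical model hooks, and the equal inner budget per scenario is exactly the inefficiency that importance allocation targets.

    import numpy as np

    def nested_tail_risk(sample_scenario, inner_loss, n_outer=1000, n_inner=100, alpha=0.99):
        """Outer loop: risk-factor scenarios. Inner loop: portfolio-loss estimate per scenario."""
        losses = np.empty(n_outer)
        for k in range(n_outer):
            s = sample_scenario()
            losses[k] = np.mean([inner_loss(s) for _ in range(n_inner)])
        var = np.quantile(losses, alpha)                  # Value-at-Risk at level alpha
        cvar = losses[losses >= var].mean()               # average loss beyond VaR
        return var, cvar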
Rare-event probabilities and risk measures that quantify the likelihood of catastrophic or failure events can be sensitive to the accuracy of the underlying input models, especially regarding their tail behaviors. We investigate how the lack of tail information in the input can affect the output extremal measures, in relation to the amount of data needed to inform the input tail. Using the basic setting of estimating the probability that an aggregation of i.i.d. input variables overshoots a threshold, we argue that heavy-tailed problems are much more vulnerable to input uncertainty than light-tailed problems. We explain this phenomenon via their large-deviations behaviors and substantiate it with numerical experiments.
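The phenomenon can be reproduced with a small Monte Carlo experiment, sketched below, which compares the overshoot probability of a sum of i.i.d. exponential inputs with that of mean-matched Pareto inputs. The threshold, tail index, and sample sizes are illustrative choices, not those of the paper.

    import numpy as np

    def overshoot_prob(sampler, n_terms=30, threshold=100.0, n_rep=200_000, seed=1):
        rng = np.random.default_rng(seed)
        sums = sampler(rng, (n_rep, n_terms)).sum(axis=1)
        return (sums > threshold).mean()

    light = lambda rng, shape: rng.exponential(1.0, shape)                   # mean 1, light tail
    heavy = lambda rng, shape: (rng.pareto(2.5, shape) + 1.0) * (1.5 / 2.5)  # mean 1, heavy tail

    # The light-tailed estimate is essentially zero at this threshold, while the heavy-tailed
    # overshoot is driven by a single large input and remains visible in the sample.
    print(overshoot_prob(light), overshoot_prob(heavy))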
We apply a generalized likelihood ratio (GLR) derivative estimation method developed in previous works to estimate the quantile sensitivity of financial models with correlations and jumps. Examples illustrate the wide applicability of the GLR method by providing several practical settings where other techniques are difficult to apply, and numerical results demonstrate the effectiveness of the new estimator.
We study market design issues for heterogeneous assets in a setting with multiple assets and multiple traders. The new modeling feature is that a social planner can decide to shut down some asset markets so that no traders can trade these assets. We show that closing down some asset markets can increase the total volume of trades by all traders, because a smaller number of assets available to trade can reduce coordination failures between different traders, improving the overall matching efficiency. Using numerical simulation, we study the optimal numbers of markets to open under different scenarios. This has practical implications for the optimal design of session-based trading protocols in the corporate and municipal bond markets. This problem naturally leads to the use of simulation to select the best system from all possible market design choices. We implement simulation experiments to demonstrate our finding and analyze sensitivities to several market-relevant parameters.
Track Coordinator - Healthcare: Masoud Fakhimi (University of Surrey), Maria Mayorga (North Carolina State University), Jie Song (Peking University), Xiaolan Xie (École Nationale Supérieure des Mines de Saint-Étienne)
Healthcare
Agent-based Models in Healthcare
Chair: Dehghani Mohammad (Northeastern University)
An Agent-based Model of Hepatitis C Virus Transmission Dynamics in the Indian Context
Soham Das and Diptangshu Sen (Indian Institute of Technology Delhi), Ajit Sood (Dayanand Medical College and Hospital), and Varun Ramamohan (Indian Institute of Technology Delhi)
In this study, we develop a model of hepatitis C virus (HCV) transmission dynamics capable of analyzing the health, economic and epidemiological impact of treatment and large-scale screening in the Indian context. The model simulates the interaction of infected and uninfected agents in environments wherein key risk factors of HCV transmission operate. The natural history of disease is simulated using a previously published and validated Markov model. The agent interaction/transmission environments simulated by the model include a home environment for transmission via unprotected sex, a medical environment for transmission via unsafe medical practices, educational and social interaction environments for conversion of non-injecting drug user (IDU) agents to IDUs and transmission via sharing of injecting equipment among IDUs. The model is calibrated to current HCV and IDU prevalence targets. We present model calibration results and preliminary results for the impact of treatment uptake rates on HCV and IDU prevalence.
Providing quality emergency care is one of the biggest challenges faced in healthcare today. This article lays the groundwork for operating and planning emergency care provision in metropolitan environments using a system approach that goes beyond studying each emergency department in isolation. The approach consists of the development of an agent-based simulation using a bottom-up approach modeling patients, doctors, hospitals, and their interactions. The simulation is validated against real historical data of waiting times in the Stockholm region. Through experimentation with the simulation, changing the way patients choose emergency departments in metropolitan areas through the provision of information in real-time is shown to have generally a positive effect on waiting times and the quality of care. The simulation analysis shows that the effects are not uniform over the whole system and its agents.
Multi-objective Model Exploration of Hepatitis C Elimination in an Agent-based Model of People Who Inject Drugs
Eric Tatara, Nicholson T. Collier, and Jonathan Ozik (Argonne National Laboratory, University of Chicago); Alexander Gutfraind, Scott J. Cotler, and Harel Dahari (Loyola University Medical Center); Marian Major (US Food and Drug Administration); and Basmattee Boodram (University of Illinois at Chicago)
Hepatitis C virus (HCV) infection is a leading cause of chronic liver disease and mortality worldwide, and persons who inject drugs (PWID) are at the highest risk of acquiring and transmitting HCV infection. We developed an agent-based model (ABM) to identify and optimize direct-acting antiviral (DAA) therapy scale-up and treatment strategies for achieving the World Health Organization (WHO) goal of HCV elimination by the year 2030. While DAA therapy is highly efficacious, it is also expensive, and therefore intervention strategies should balance the goal of elimination against the cost of the intervention. Here we present and compare two methods for finding PWID treatment enrollment strategies: a standard model parameter sweep and an evolutionary multi-objective optimization algorithm. The evolutionary approach provides a Pareto-optimal set of solutions that minimizes treatment costs and incidence rates.
Atherosclerotic cardiovascular disease (ASCVD) is among the leading causes of death in the US. While it is known that ASCVD has familial and genetic components, understanding of the role of genetic testing in the prevention and treatment of cardiovascular disease has been limited. To this end, we develop a simulation framework to estimate the risk of ASCVD events due to clinical and genetic factors. One controllable risk factor for ASCVD events is the patient's cholesterol level. Cholesterol treatment plans are modeled using Markov decision processes. By simulating the health trajectories of patients, we determine the impact of genetic testing on optimal cholesterol treatment plans. As precision medicine and genetic testing become increasingly important, having such a simulation framework becomes essential.
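To illustrate how an MDP-based treatment plan is computed in principle, the sketch below runs value iteration on a toy three-state health model with two treatment actions. The states, transition probabilities, rewards, and discount factor are illustrative placeholders, not clinically derived quantities or the authors' model.

    import numpy as np

    states = ["low_risk", "high_risk", "ascvd_event"]
    actions = ["no_statin", "statin"]

    # P[a][s, s']: transition probabilities under each treatment action (rows sum to 1)
    P = {"no_statin": np.array([[0.90, 0.08, 0.02],
                                [0.05, 0.85, 0.10],
                                [0.00, 0.00, 1.00]]),
         "statin":    np.array([[0.93, 0.06, 0.01],
                                [0.10, 0.85, 0.05],
                                [0.00, 0.00, 1.00]])}
    # one-period rewards, e.g. quality-adjusted life time minus a small treatment disutility
    R = {"no_statin": np.array([1.00, 0.90, 0.0]),
         "statin":    np.array([0.98, 0.88, 0.0])}

    def value_iteration(gamma=0.97, tol=1e-8):
        v = np.zeros(len(states))
        while True:
            q = np.array([R[a] + gamma * P[a] @ v for a in actions])   # action values per state
            v_new = q.max(axis=0)
            if np.max(np.abs(v_new - v)) < tol:
                return v_new, [actions[i] for i in q.argmax(axis=0)]   # optimal policy per state
            v = v_new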
The time and money required to run clinical trials, as well as the cost effectiveness of technologies emerging from these trials are receiving increasing scrutiny. This paper explores the use of techniques inspired from fully sequential simulation optimization, based on Bayesian expected value of sampling information arguments, in the context of highly-sequential multi-arm trials. New allocation rules are shown to be useful for selecting which technology to assign a patient in a trial. They are based on clinical cost-benefit tradeoffs and the size of the population who benefits from the technology adoption decision.
The Effect of the Distribution of the Inverse Growth Rate on Pancreatic Cancer Progression
Lena Abu-El-Haija (German Jordanian University), Julie Ivy and Osman Ozaltin (North Carolina State University), and Walter Park (Stanford University Medical Center)
Pancreatic cancer is a low-incidence disease, and tumor progression studies using patient longitudinal data have had limited sample sizes. Estimating the tumor inverse growth rate and its distribution is therefore a challenge. Using a tumor progression model that incorporates the distribution of the inverse growth rate as its underlying assumption, pancreatic cancer progression models were built under two assumed distributions for the inverse growth rate: Uniform and Gamma. This study uses simulation to evaluate the effect of the inverse growth rate distribution on the tumor progression models by examining tumor timelines. It was found that the tumor timeline is about nine months longer under the assumption that the inverse growth rate follows a Gamma distribution. It was inconclusive whether tumor progression is faster or slower in older patients, as the progression models with different underlying assumptions on the inverse growth rate yielded opposite results.
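The sensitivity of the timelines to the assumed distribution can be illustrated with a short sketch. It assumes exponential tumor growth dV/dt = V/r, where r is the inverse growth rate, and compares mean-matched Uniform and Gamma distributions whose parameters are illustrative, not the values fitted in the study.

    import numpy as np

    rng = np.random.default_rng(42)

    def timeline(inverse_growth_rates, v0=0.1, v_detect=1.0):
        """Time for a tumor to grow from volume v0 to v_detect when V(t) = v0 * exp(t / r)."""
        return inverse_growth_rates * np.log(v_detect / v0)

    r_uniform = rng.uniform(2.0, 10.0, size=100_000)             # mean 6
    r_gamma = rng.gamma(shape=2.0, scale=3.0, size=100_000)      # mean 6, heavier right tail

    print(np.median(timeline(r_uniform)), np.median(timeline(r_gamma)))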
Advances in data and process mining algorithms, combined with the availability of sophisticated information systems, have created an encouraging environment for innovations in simulation modelling. Researchers have investigated the integration of such algorithms with business process modelling to facilitate the automated building of simulation models. These endeavors have resulted in a prototype termed the Auto Simulation Model Builder (ASMB) for DES models. However, this prototype has limitations that undermine its application to complex systems. This paper presents an extension of the ASMB framework previously developed by the authors, adapted for healthcare systems. The proposed framework offers a comprehensive solution for resource handling to support complex decision-making processes around hospital staff planning. The framework also introduces a machine learning, real-time, data-driven prediction approach for system performance using advanced activity blocks for the auto-generated model, based on live streams of patient data. This prediction can be useful for the management of both single and multiple healthcare units.
This paper describes a mixed-mode approach to simulating a global pandemic. It describes the architectural approach and design decisions as well as the data sources, development tools, and processes in each of the three simulation domains: System Dynamics (SD), Discrete Event Simulation (DES), and Agent-Based Modeling (ABM). The encompassing model uses a network of SD models, each representing a 30 km-square grid cell of Earth's surface, with attributes describing population density as well as economic, political, sanitary, and healthcare capabilities. Governments and outbreak response teams are represented by software agents, making decisions about resource allocations and performing vaccination drives. A discrete event engine drives the SD models and the agents, as well as any prescribed response actions. Where possible – and it was almost always possible – Open Source data and software are used.
An ageing population directly affects the volume and structure of hospital demand while raising concerns about both short- and long-term medical service accessibility. In this context, simulations can be used to facilitate understanding of population dynamics and the mechanisms driving healthcare service demand. This study's primary goal was to determine how demographic changes influence the demand for inpatient hospital services using a hybrid simulation technique. The model was used to predict how changes in the age-gender population distribution would affect future hospital inpatient service use among 17 hospitals located in a large administrative region in Poland. The population model was constructed as a system dynamics model, while patient pathways were modeled using a discrete event approach. Results showed that the hybrid model enabled analyses and insights that were not delivered by either model when used separately.
Urban mental health challenges call for new ways of designing policies to address the ongoing mental health issues in cities. Policymaking for mental health in cities is extremely difficult due to the complex nature of mental health, the structure of cities, and their multiple subsystems. This paper presents a general system dynamics model of factors affecting mental health and a method to test the sensitivity of the model to policy options using an approach combining system dynamics and fuzzy cognitive maps. The method is developed and tested to evaluate policies built around feedback loops. The approach succeeded in identifying the factors that substantially improve the mental health of the city population for specific contexts. It also suggests the coordination needed between different subsystems to reach these objectives.
Building Global Research Capacity in Public Health: The Case of a Science Gateway for Physical Activity Lifelong Modelling and Simulation
Anastasia Anagnostou, Nana Anokye, Simon J. E. Taylor, Derek Groen, and Diana Suleimenova (Brunel University London) and Riccardo Bruno and Roberto Barbera (University of Catania)
Physical inactivity is a major risk factor for non-communicable diseases and has a negative impact on quality of life in high-income as well as low- and middle-income countries (LMICs). Increasing levels of physical activity is recognized as a strategic pathway to achieving the UN's 2030 Sustainable Development Goals. Research can support policy makers in evaluating strategies for achieving this goal, but various barriers limit the research capacity of researchers in LMICs. We discuss how global research capacity might be developed in public health by supporting collaboration via Open Science approaches and technologies such as Science Gateways and Open Access Repositories. The paper reports on how we are contributing to research capacity building in Ghana using a Science Gateway for our PALMS (Physical Activity Lifelong Modelling & Simulation) agent-based micro-simulation that we developed in the UK, and how we use an Open Access Repository to share the outputs of the research.
The United Network for Organ Sharing (UNOS) has been using simulation models for over two decades to guide the evolution of organ allocation policies in the United States. The UNOS kidney simulation model (KPSAM), which played a crucial role in the major 2014 changes to U.S. kidney allocation policy, is also made available to the general public as an executable file. However, this format offers little flexibility to users trying out different policy proposals. We describe the development of a discrete-event simulation model as an alternative to KPSAM. It is similar to KPSAM in incorporating many clinical and operational details, but it offers more flexibility in evaluating various policy proposals and runs significantly faster than KPSAM due to its efficient use of modern computing technologies. Simulated results closely match actual U.S. kidney transplantation outcomes, building confidence in the accuracy and validity of the model.
The Emergency Department (ED) is an environment prone to high error rates with severe consequences. Prior studies report that miscommunication contributes to 80% of serious medical errors. Handoffs, the transfer of patient care from one physician to another, are a common occurrence and are predisposed to errors as a result of interruptions and high workload. Moreover, the Institute of Medicine reported that a majority of treatment delays result from communication errors associated with shift change. A simulation model was developed to test various physician-to-patient assignment policies to minimize the number of handoffs and reduce the workload at the end of a shift. Using a policy that restricts a physician from receiving high-acuity patients in the last two hours of the shift and limits the maximum number of patients per physician to five, the number of handoffs can be reduced by as much as 22%.
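The assignment rule described above can be made concrete with a small sketch; the function name, data structure, and cutoff values below are illustrative rather than the paper's implementation.

```python
# Sketch (hypothetical parameters) of the assignment policy: near the end of a shift,
# a physician receives no new high-acuity patients and never carries more than a
# fixed number of patients at once.
def can_assign(physician, patient_acuity, hours_left_in_shift,
               high_acuity_cutoff_hours=2.0, max_patients=5):
    """Return True if the incoming patient may be assigned to this physician."""
    if len(physician["patients"]) >= max_patients:
        return False
    if patient_acuity == "high" and hours_left_in_shift <= high_acuity_cutoff_hours:
        return False
    return True

doc = {"patients": ["p1", "p2", "p3"]}
print(can_assign(doc, "high", 1.5))   # False: high-acuity patient too close to shift end
print(can_assign(doc, "low", 1.5))    # True: low-acuity patient, below the patient cap
```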
Evaluating an Emergency Department Care Redesign: A Simulation Approach
Breanna P. Swan and Osman Ozaltin (North Carolina State University) and Sonja Hilburn; Elizabeth Gignac; and George McCammon, Jr. (Southeastern Health)
Complex interactions between workload variability, uncertain and increasing arrival rates, and resource constraints make it difficult to improve flow in emergency departments (EDs). This uncertainty causes crowded EDs, long patient lengths of stay, and burnout among care providers. One way to improve efficiency while maintaining high-quality care is to switch from a siloed, unit-based department to a team-based design or pod system. This paper compares a pod system against the unit-based design at Southeastern Health's ED using a discrete event simulation. The robustness of the models is tested under a selection of staffing plans, increased arrival rates, and a changing workload mix of incoming patients. Ultimately, it is shown that the pod system maintains quality-of-care metrics while increasing resource utilization, demonstrating that optimizing the pod system's schedule and routing can improve flow in the ED.
Simulation models are often used to gain a better understanding of a system's sensitivity to changes in the input parameters. Data gathered during simulation runs are aggregated into Key Performance Indicators (KPIs) that allow one to assess a model's or system's performance. KPIs do not provide a deeper understanding of the causes of the observed output, because this is not their primary objective. By contrast, dynamic bottleneck methods both identify the elements that yield the largest gain in productivity with increased availability and visualize these elements over time so that bottlenecks can be better understood. In this paper we discuss whether dynamic bottleneck detection methods can be utilized to identify, measure, and visualize causes of observed behavior in complex models. We extend standard bottleneck detection methods and introduce the Activity-Entity-Impact-Method. The practicality of the method is demonstrated by an example model of a typical Emergency Department setting.
Simulation solution validation concerns the comparison between the expected and actual performance of a solution provided by a simulation model. Such a comparison can become challenging when not only the implementation of the solution has changed the environment, but the processes and data have changed as well. We illustrate this challenge using a case study at an Integrated Emergency Post (IEP), which is a collaboration between a general practitioners post and a hospital's emergency department to provide out-of-hours emergency care. After we performed a simulation study, the proposed solution was implemented, and data were gathered for two years. We validated the solution by performing various comparisons, using simulated and realized performance, under the original and changed data and processes, and with and without the proposed solution. We propose a solution validation framework to structure these comparisons, and provide key insights regarding solution validation using our case study at the IEP.
Clinical Pathway Analysis using Process Mining and Discrete-event Simulation: An Application to Incisional Hernia
Raksmey Phan (Mines Saint-Etienne), Damien Martin (Medtronic), Vincent Augusto (Mines Saint-Etienne), and Marianne Sarazin (Institut Pierre Louis d'Epidémiologie et de Santé Publique)
An incisional hernia (IH) is a ventral hernia that develops after surgical trauma to the abdominal wall, such as a laparotomy. IH repair is a common surgery that can generate chronic pain, decreased quality of life, and significant healthcare costs caused by hospital readmissions. The goal of this study is to analyze the clinical pathway of patients with an IH using a medico-administrative database. After a preliminary statistical analysis, a process mining approach is proposed to extract the most significant pathways from the database. The resulting causal net is converted into a statechart model that can be simulated. The model is used to understand the times of occurrence of complications and the associated costs. It enables the simulation of what-if scenarios to propose an improved care pathway for the most exposed patients; in particular, it evaluates the impact of using prophylactic mesh at the time of abdominal wall closure during a laparotomy on hospitalization costs and readmissions.
Mental health disorders are on the rise around the world. Inadequate service provision and lack of access have led to wide gaps between the need for treatment and service delivery. Despite the popularity of Discrete-event Simulation (DES) in healthcare planning and operations, there is evidence of limited application of DES in planning for mental healthcare services. This paper identifies and reviews all the papers that utilize DES modelling to address planning and operations issues in mental healthcare services. The aim is to contribute a roadmap for the future application of DES in mental healthcare services, with an emphasis on planning and operations.
Simulation Models as Decision Support Tools in Practice
Chair: Navonil Mustafee (University of Exeter)
A Management Flight Simulator of an Intensive Care Unit
Daniel Garcia-Vicuña and Fermin Mallor (Public University of Navarre); Laida Esparza (Navarre Hospital Compound, UPNA); and Pedro Mateo (University of Zaragoza)
Management Flight Simulators (MFS) supply a simulated environment in which managers can learn from experience in a controlled setting. Although such simulators are common in other areas, no comparable software has been developed for learning about the complexity of Intensive Care Unit (ICU) management. This paper describes an MFS of ICUs that includes features distinguishing it from other simulators, such as the evolution of patients' health status and the recreation of real discharge and admission processes. The mathematical model is a discrete event simulation model in which different types of patients (emergency and scheduled) arrive at the ICU. The user manages the simulated ICU by deciding on patient admission or diversion and which inpatients are discharged. The analysis of recorded data is used to detect controversial scenarios and to understand how physicians' decisions are made.
Emergency Department (ED) overcrowding is a pervasive problem worldwide that impacts both performance and safety. Staff are required to react and adapt to changes in demand in real time, while continuing to treat patients. These decisions and actions may be supported by enhanced system knowledge. We present an application of a hybrid modelling approach for short-term decision support in urgent and emergency healthcare. It uses seasonal ARIMA time-series forecasting to predict ED overcrowding in a near-future moving window (1-4 hours) using data downloaded from a digital platform (NHSquicker). NHSquicker delivers real-time wait times from multiple centers of urgent care in the South-West of England. Alongside historical distributions, these data initialize the operational state of a real-time discrete-event simulation model. The ARIMA forecasts trigger simulation experimentation of ED scenarios, including proactive diversion of low-acuity patients to alternative facilities in the urgent-care network, supporting short-term decision-making toward reducing overcrowding in near real time.
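A sketch of the forecasting step only, using statsmodels' SARIMAX on synthetic hourly occupancy data; the model orders, crowding threshold, and trigger logic are illustrative assumptions, not the NHSquicker pipeline itself.

```python
# Illustrative sketch: fit a seasonal ARIMA model to an hourly occupancy series and
# forecast the next 1-4 hours; a forecast above a threshold would trigger DES experiments.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
hours = pd.date_range("2019-01-01", periods=24 * 28, freq="H")
occupancy = 30 + 10 * np.sin(2 * np.pi * hours.hour / 24) + rng.normal(0, 3, len(hours))
series = pd.Series(occupancy, index=hours)

model = SARIMAX(series, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
fit = model.fit(disp=False)
forecast = fit.forecast(steps=4)          # 1- to 4-hour-ahead forecast

CROWDING_THRESHOLD = 45                   # hypothetical occupancy threshold
if (forecast > CROWDING_THRESHOLD).any():
    print("Trigger DES experimentation (e.g., divert low-acuity patients).")
else:
    print("No intervention scenario triggered.")
```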
Evaluating Community-based Integrated Health and Social Care Services: The SIMTEGR8 Approach
Antuela Tako, Stewart Robinson, and Anastasia Gogi (Loughborough University); Zoe Radnor (University of London); and Cheryl Davenport (Leicestershire County Council)
This paper introduces a new facilitated simulation approach, called SIMTEGR8, developed to evaluate the effectiveness of integrated community-based health and social care services. This involves developing and using simulation models, which serve as a catalyst for generating discussion about the effectiveness of the patient pathway and for identifying potential improvements to the service. The simulation analyst works jointly with different stakeholder groups: service providers, commissioners, and service users. Service users, a stakeholder group that can contribute to the knowledge generated in facilitated modelling sessions, have not been included in facilitated simulation studies reported so far in the literature. For illustration purposes, the Lightbulb project, a housing support service helping elderly and frail people in Leicestershire in the UK stay safe at home, is presented in this paper. The outcomes of the study and the challenges faced with involving patients in simulation projects are discussed.
Scheduling providers in outpatient specialty clinics is often done in a non-systematic fashion as a result of providers having varying clinic time assignments. This is particularly challenging in academic medical centers and large health systems where providers have responsibilities outside of clinical duties. This leads to inconsistent use of clinic space, staff, and other fixed resources in addition to high variability in operational performance measures. In this paper we present a discrete-event simulation model used to evaluate heuristics for scheduling providers within a specialty clinic. The simulation model is developed and validated based on a specialty clinic within University of Minnesota Health. The scheduling heuristics are evaluated across multiple operational performance measures based on relative improvements compared to test cases from actual schedules used in practice.
Long wait times are among the most common complaints from patients visiting the glaucoma clinic at the Kellogg Eye Center at the University of Michigan (UM), and have also been reported as a barrier to glaucoma care at other clinics. To address this issue, we develop a discrete-event simulation model to identify the bottlenecks in the clinic that cause the majority of patient wait time. Different resource-supplementation policies, i.e., adding staff and the corresponding equipment and exam rooms, are then proposed. We evaluate each of them using our simulation model through a series of what-if experiments. The most beneficial policy, considering the trade-off between patient wait time and resource supplementation expense, is recommended to the clinic for implementation.
Assessment of the Impact of Teledermatology using Discrete Event Simulation
Afafe Zehrouni (École Nationale Supérieure des Mines de Saint-Étienne (EMSE)), Vincent Augusto (École Nationale Supérieure des Mines de Saint-Étienne (EMSE)), Tu Anh Duong (Hospital Henri Mondor), and Xiaolan Xie (École Nationale Supérieure des Mines de Saint-Étienne (EMSE))
Evolution of technology and the complexity of the medical system have contributed to the increasing interest in telemedicine. The purpose of this paper is to present a discrete event simulation model of the teledermatology process using the tool TelDerm. The logic of the simulation describes the process from the detection of the dermatological problem to its resolution. The scenarios reflect different changes in the flow in order to quantify the impact of telemedicine on the healthcare system. Several key performance indicators measure medical and administrative workload variations for all human resources involved. In addition, we assess the impact on the patient’s journey through the process.
Track Coordinator - Hybrid Simulation: David Bell (Brunel University London), Tillal Eldabi (Brunel University London), Antuela Tako (Loughborough University)
Hybrid Simulation
Panel: Toward Conceptual Modeling for Hybrid Simulation: Setting the Scene
Chair: Antuela Tako (Loughborough University)
Panel: Towards Conceptual Modelling for Hybrid Simulation: Setting the Scene
Antuela Tako (Loughborough University), Tillal Eldabi (University of Surrey), Paul Fishwick (University of Texas at Dallas), Caroline Krejci (University of Texas at Arlington), and Martin Kunc (University of Southampton)
A recent review paper of hybrid simulation modelling has identified conceptual modelling as one of the least developed stages in the modelling lifecycle. Admittedly, it is generally accepted that the same applies to conceptual modelling for single-method simulation. However, in the case of building a hybrid simulation we risk creating even more complex models. Furthermore, it does not help that there are no standard modelling approaches shared by the different modelling methods (DES, SD, ABS) to enable a common hybrid conceptual model. This panel paper discusses the state of the art of conceptual modelling for hybrid simulation and the further developments needed to support the design of hybrid simulations. The purpose of this panel is to initiate a discussion about conceptual modelling in HS, with a view to identifying improvements and needs for further research in this area.
Strategic planning using simulation is a growing field of practice, driven largely by the big consulting firms. While System Dynamics is a widely used simulation method in strategic planning, given its advantages in handling global aggregates and its deterministic models, hybrid modelling can achieve similar popularity. This paper presents some suggestions on how to use hybrid modelling in strategic planning.
Agent-Based Models (ABMs) and Fuzzy Cognitive Maps (FCMs) are complementary modeling techniques: the former represents interacting agents across a landscape over time but does not specify how to encapsulate subjective behaviors in agents, whereas the latter can model a subjective behavior but cannot scale it to a population or over time. These techniques are increasingly used together, particularly as hybrid models, and we propose the first review of this emerging practice. We identified 31 articles combining the two techniques. Our analysis revealed three different high-level architectures for structuring the co-existence of ABMs and FCMs, such as using an interface or embedding an FCM into each agent. Our review provides a snapshot of an emerging field, assembling the evidence base to identify potential areas for future work, such as consolidating and standardizing software development efforts in a currently fragmented field.
Developing a simulation model of a complex system requires a significant investment of time, expertise and expense. In order to realize the greatest return on such an investment, it is desirable to extend the lifecycle of the simulation model as much as possible. Existing studies typically end after the `first loop' of the lifecycle, with the computer model suitable for addressing the initial requirements of the stakeholders. We explore extending the modeling lifecycle to a `second loop' by introducing an existing hybrid simulation model to a new group of stakeholders and further developing it to capture new requirements. With the aid of an example application, we explain how the hybrid model facilitated stakeholder engagement by closely reflecting the real world and how the model lifecycle has been successfully extended to maximize the benefit to Eurostar International Limited.
During Summer 2018, the first author spent the summer in Exeter, funded in part by a Leverhulme Trust Visiting Professor grant awarded the year before. The visit included several trips hosted by other UK institutions, a public lecture on modelling, and interaction with faculty at the University of Exeter, the host institution for the grant. Broadening participation in modelling was a key theme of the visit. The awardee found himself spending significant time spanning normally distinct departments and faculty in a quest to better understand the diverse nature of modelling. We summarize the key milestones of the visit and project how the summer experience might inform the discipline of modelling. Further, we provide a roadmap for other researchers who may wish to collaborate in applying for similar grant awards.
Combining Simulation with a Biased-Randomized Heuristic to Develop Parametric Bonds for Insurance Coverage Against Earthquakes
Christopher Bayliss (Universitat Oberta de Catalunya, IN3); Alejandro Estrada-Moreno (Universitat Rovira i Virgili); Angel A. Juan (Universitat Oberta de Catalunya); and Guillermo Franco and Roberto Guidotti (Guy Carpenter & Company, LLC)
The social and economic impact of natural catastrophes on communities is a concern for many governments and corporations across the globe. A class of financial instruments, parametric hedges, is emerging in the (re)insurance market as a promising approach to closing the protection gap related to natural hazards. This paper focuses on the design of such parametric hedges, with the objective of maximizing the risk transferred subject to a budget constraint. Using Greece, one of the most seismically prone European regions with limited seismic insurance penetration, as a case study, this paper proposes a biased-randomized algorithm to solve the optimization problem. The algorithm hybridizes Monte Carlo simulation with a heuristic to generate a variety of solutions. A simulation stage allows for analyzing the payout distribution of each solution. Results show the impact of the problem resolution on the transferred risk and on the distribution of triggered payments.
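The biased-randomization idea can be illustrated with a small sketch: candidates sorted by a greedy criterion are sampled via a geometric distribution, so better-ranked options are chosen more often while still allowing diversification. The candidate list, scores, and parameter beta below are hypothetical, not the paper's instance data.

```python
# Illustrative sketch of biased randomization over a greedy-sorted candidate list.
import math
import random

def biased_random_choice(sorted_candidates, beta=0.3, rng=random):
    """Pick an index skewed toward the front of a list sorted best-first."""
    u = max(rng.random(), 1e-12)                       # avoid log(0)
    idx = int(math.log(u) / math.log(1.0 - beta)) % len(sorted_candidates)
    return sorted_candidates[idx]

# Hypothetical "coverage zones" with greedy risk-transfer scores, sorted best-first
zones = sorted([("zone_a", 0.9), ("zone_b", 0.7), ("zone_c", 0.4), ("zone_d", 0.2)],
               key=lambda z: z[1], reverse=True)

random.seed(7)
sample_solutions = [biased_random_choice(zones) for _ in range(5)]
print(sample_solutions)   # mostly the best zones, with occasional diversification
```

Each constructed solution would then be passed to a Monte Carlo stage to estimate its payout distribution, which is the simulation half of the hybrid scheme described above.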
The application of a hybrid discrete rate and population balance simulation model to a high shear agglomeration process provides in-depth, dynamic insight into how a non-homogeneous product behaves across the entire process as well as at the level of individual process components. By tracking individual product attributes throughout the process, the customer can size equipment, design control systems, and optimize process parameters.
Panel: Hybrid Simulation Development: Is it Just Analytics?
Chair: David Bell (Brunel University London)
Hybrid Simulation Development – Is It Just Analytics?
David Bell and Derek Groen (Brunel University London), Navonil Mustafee (University of Exeter), Jonathan Ozik (Argonne National Laboratory), and Steffen Strassburger (Technische Universität Ilmenau)
Hybrid simulations can take many forms, often connecting a diverse range of hardware and software components with heterogeneous data sets. The scale of examples is also diverse, ranging from the high-performance computing community's use of high-performance data analytics (HPDA) to the synthesis of software libraries or packages on a single machine. Hybrid simulation configuration and output analysis is often akin to analytics, with a range of dashboards, machine learning, data aggregations, and graphical representations. Underpinning the visual elements are hardware, software, and data architectures that execute hybrid simulation code. These are wide ranging, with few generalized blueprints, methods, or patterns of development. This panel will discuss a range of hybrid simulation development approaches and endeavor to uncover possible strategies for supporting the development and coupling of hybrid simulations.
In many industrial manufacturing companies, energy has become a major cost factor. Energy aspects are included in the decision-making systems of production planning and control to reduce manufacturing costs. To this end, the simulation of production processes requires not only the consideration of logistical and technical production factors but also the integration of time-dependent energy flows, which are continuous in nature. A hybrid simulation, using a continuous approach to depict the energy demand of production processes in combination with a discrete approach to map the material flows and logistic processes, represents the complex interactions between material flow and energy usage in production more realistically. This paper presents a hybrid simulation approach combining System Dynamics, Discrete-Event and Agent-Based Simulation for energy efficiency analysis in production, considering energy consumption in the context of planning and scheduling operations and applying it to a use-case scenario of mechanical processing of die-cast parts.
Typically, there is more than one possible answer to the question of how to model and simulate a problem. Combinations of different simulation paradigms and techniques are currently in vogue and potentially necessary to meet certain requirements. Besides clear advantages, model combinations tend to increase the complexity of the overall model building process. Even if different paradigms are coupled via a middleware, another use case might require reimplementing the connections. Additionally, users need to spend more effort to understand how the results of composed models have been generated. If it is not clear how data has been transformed from input to output, trust in the simulation is impaired. To counteract this, a concept for reusable translation between submodels is proposed, leading to better transparency regarding the data flows. We applied the High Level Architecture based approach to connect a macroscopic, a mesoscopic, and a microscopic traffic simulation.
The inland waterways in the United States (U.S.) are used to transport approximately 20% of America's coal, 22% of U.S. petroleum products, and 60% of farm exports, making these waterways a significant contributor to the U.S. multimodal transportation system. In this study, data about natural extreme events affecting inland waterways are collected and used to predict possible occurrences of such events in the future using a spatio-temporal statistical model. We also investigate the effect of waterway disruptions on interconnected transportation systems using a simulation tool built on a statistical model. The developed methods are centered on inland waterways but can be used broadly for other local, regional, and national infrastructures. A case study based on the Mississippi River and the McClellan–Kerr Arkansas River Navigation System (MKARNS) illustrates the use of the simulation tool in interdependence modeling and decision making for the operation of a multimodal transportation network.
Schelling’s social segregation model has been extensively studied over the years. A major implication of the model is that individual preferences of similarity lead to a collective segregation behavior. Schelling used Agent-Based Modeling (ABM) with uni-dimensional agents. In reality, people are multidimensional. This raises the question of whether multi-dimensionality can boost stability or reduce segregation in society. In this paper, we first adopt ABM to reconstruct Schelling’s original model and discuss its convergence behaviors under different threshold levels. Then, we extend Schelling’s model with multidimensional agents and investigate convergence behaviors of the model. Results suggest that if agents have high levels of demand for identical neighbors, the society might become less stable or even chaotic. Also, several experiments suggest that multidimensional agents are able to form a stable society that is not segregated, if agents prefer to stay adjacent to not only "identical" but also "similar" neighbors.
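A minimal sketch of a multidimensional Schelling-style grid model, assuming binary attribute vectors and a "shared attributes" notion of similarity; grid size, thresholds, and the relocation rule are illustrative choices rather than the paper's exact setup.

```python
# Illustrative multidimensional Schelling sketch: agents with binary attribute vectors
# relocate to empty cells whenever too few neighbors are "similar" to them.
import numpy as np

rng = np.random.default_rng(2)
N, DIMS = 30, 3
THRESH = 0.5       # required fraction of similar neighbors for an agent to be satisfied
MIN_SHARED = 2     # attributes two agents must share to count as "similar"

attrs = rng.integers(0, 2, size=(N, N, DIMS))   # binary attribute vectors per cell
occupied = rng.random((N, N)) > 0.1             # roughly 10% of cells left empty

def unhappy():
    cells = []
    for i in range(N):
        for j in range(N):
            if not occupied[i, j]:
                continue
            similar = total = 0
            for di in (-1, 0, 1):               # Moore neighborhood on a torus
                for dj in (-1, 0, 1):
                    ni, nj = (i + di) % N, (j + dj) % N
                    if (di, dj) == (0, 0) or not occupied[ni, nj]:
                        continue
                    total += 1
                    if int((attrs[i, j] == attrs[ni, nj]).sum()) >= MIN_SHARED:
                        similar += 1
            if total > 0 and similar / total < THRESH:
                cells.append((i, j))
    return cells

for step in range(50):                           # relocate unhappy agents to random empty cells
    movers = unhappy()
    if not movers:
        break
    empties = list(zip(*np.where(~occupied)))
    for (i, j) in movers:
        if not empties:
            break
        ni, nj = empties.pop(int(rng.integers(len(empties))))
        attrs[ni, nj] = attrs[i, j]
        occupied[ni, nj], occupied[i, j] = True, False
        empties.append((i, j))

print(f"unhappy agents remaining after {step + 1} iterations: {len(unhappy())}")
```

Varying MIN_SHARED relative to DIMS is one simple way to explore the stability and segregation effects of multidimensionality discussed above.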
We present a modelling approach to investigate the effect of evolving violent events on the forced displacement of people in affected countries. This work is performed in the context of the EU-funded HiDALGO Centre of Excellence, where we seek to establish more scalable and accurate models for migration prediction. Such a hybrid simulation approach is necessary because we need to understand how conflicts may evolve if we are to forecast the escape of people from future conflicts. To accomplish this, we couple a new model for conflict propagation with existing forced migration models. We explore the implications of our setup by studying the effect of different conflict progressions on the forced displacement of people, using an established model of the Mali 2012 conflict. We conclude that accurately predicting conflict evolutions is a key determinant in the outcomes of forced migration simulations, particularly if such forecasts are made over longer periods.
Service design deals with methods to organize people, organizations, infrastructure, communication, and resources such that macro outcome parameters of the service are achieved while also ensuring an excellent individual customer experience. Services are complex, dynamic processes engaging service deliverer and customer over several interactions across multiple touchpoints. Thus, designing a service well requires an understanding of the impact of the dynamics of service delivery on service outcome parameters and customer experiences. We discuss a fine-grained agent-based simulation approach to service design which allows services to be simulated in silico. Fine-grained agent models allow us to understand the macro effect of a service design and the persona-level user experiences over multiple customer touchpoints. To model the user experience we use a need-based behavior model, influenced by Maslow's need-based hierarchy and subsequent advances. We demonstrate these ideas on an example from the air travel domain.
In this tutorial we present techniques for building valid and credible simulation models. Ideas to be discussed include the importance of a definitive problem formulation, discussions with subject-matter experts, interacting with the decision-maker on a regular basis, development of a written assumptions document, structured walk-through of the assumptions document, use of sensitivity analysis to determine important model factors, and comparison of model and system output data for an existing system (if any). Each idea will be illustrated by one or more real-world examples. We will also discuss the difficulty in using formal statistical techniques (e.g., confidence intervals) to validate simulation models.
An Introduction to Modeling and Simulation with (Python(P))DEVS
Yentl Van Tendeloo (University of Antwerp); Hans Vangheluwe (University of Antwerp, McGill University); and Romain Franceschini (University of Corsica Pasquale Paoli, University of Antwerp)
Discrete Event System Specification (DEVS) is a popular formalism for modeling complex dynamic systems using a discrete-event abstraction. At this abstraction level, a timed sequence of pertinent "events" input to a system (or internal timeouts) causes instantaneous changes to the state of the system. The main advantages of DEVS are its rigorous formal definition and its support for modular composition. This tutorial introduces the Classic DEVS formalism in a bottom-up fashion, using a simple traffic light example. The syntax and operational semantics of Atomic (i.e., non-hierarchical) models are introduced first. Coupled (i.e., hierarchical) models are then introduced to structure and couple Atomic models. We continue with actual applications of DEVS, for example in the performance analysis of queueing systems. All examples are presented with the tool PythonPDEVS, though this introduction is equally applicable to other tools. We conclude with further reading on DEVS theory, DEVS variants, and DEVS tools.
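To make the Classic DEVS concepts concrete (state, time advance, output function, internal transition), here is a plain-Python illustration of the tutorial's traffic light example; it only mirrors the structure of an atomic model and is not the PythonPDEVS API, and the timings are arbitrary.

```python
# Plain-Python illustration of an autonomous atomic DEVS-style model (not the PythonPDEVS API).
class TrafficLight:
    """Autonomous atomic model cycling through green -> yellow -> red."""
    DURATIONS = {"green": 57.0, "yellow": 3.0, "red": 60.0}   # hypothetical timings (s)
    NEXT = {"green": "yellow", "yellow": "red", "red": "green"}

    def __init__(self):
        self.state = "green"

    def time_advance(self):          # ta(s): time until the next internal event
        return self.DURATIONS[self.state]

    def output(self):                # lambda(s): emitted just before the internal transition
        return f"turn_{self.NEXT[self.state]}"

    def internal_transition(self):   # delta_int(s): state change at the internal event
        self.state = self.NEXT[self.state]

# A tiny root coordinator: advance simulated time event by event.
model, t = TrafficLight(), 0.0
for _ in range(6):
    t += model.time_advance()
    print(t, model.output())
    model.internal_transition()
```

External transitions and coupled models, which the tutorial introduces next, extend exactly this structure with input ports and composition.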
How can you make your projects successful? Modeling can certainly be fun, but it can also be quite challenging. With the new demands of Smart Factories, Digital Twins, and Digital Transformation, the challenges multiply. You want your first and every project to be successful, so you can justify continued work. Unfortunately, a simulation project is much more than simply building a model – the skills required for success go well beyond knowing a particular simulation tool. A 35-year veteran who has done hundreds of successful projects shares important insights to enable project success. He also shares some cautions and tips to help avoid common traps leading to failure and frustration.
Parallel Discrete Event Simulation (PDES) is concerned with the parallel and distributed execution of discrete event simulation models. This tutorial reviews parallel computing and the properties of Discrete Event Simulation (DES) models and then examines the construction of PDES solutions that use the Time Warp Synchronization Mechanism. A review of the general challenges to building high-performance Time Warp Synchronized PDES on multi-core processors and clusters composed of multi-core processors is presented. These challenges include considerations of the general quantitative metrics exhibited by DES simulation models as well as a review of the impact that small overheads and serial portions of a program can have on the potential peak performance of a parallel solution. Finally, directions for future opportunities with heterogeneous computing, domain specific hardware solutions, and edge computing will be explored.
BPMN-Based Business Process Modeling and Simulation
Chair: Gregory Zacharewicz (IMT Mines Alès)
BPMN-based Business Process Modeling and Simulation
Paolo Bocciarelli, Andrea D'Ambrogio, Andrea Giglio, and Emiliano Paglia (University of Rome Tor Vergata)
A business process (BP) can be defined as a set of tasks that are coordinately performed by an organization to achieve a business goal. M&S (Modeling & Simulation) techniques are widely and effectively adopted for BP analysis. A BP M&S approach is typically carried out by first building a simulation model (from a conceptual model of the BP under analysis) and then producing the software implementation of the simulation model, so as to eventually execute the simulation code and obtain the results of interest. The standard language currently used to define BP models is the OMG's BPMN (Business Process Model and Notation). This paper presents a BPMN-based M&S approach that introduces a BPMN extension to specify BP simulation models as annotated BPMN models, and a domain-specific BP simulation language to specify and execute simulation model implementations, which can be seamlessly derived from annotated BPMN models by use of automated model transformations.
Chair: Joseph Youssef (American University of Beirut, IMT – Mines Alès)
The Fundamentals of Domain-specific Simulation Language Engineering
Simon Van Mierlo (University of Antwerp - Flanders Make vzw); Hans Vangheluwe (University of Antwerp - Flanders Make vzw, McGill University); and Joachim Denil (University of Antwerp - Flanders Make vzw)
Simulationists use a plethora of modelling languages. General-purpose languages such as C, extended with simulation constructs, give the user access to abstractions for general-purpose computation and modularization. The learning curve for experts in domains that are far from programming, however, is steep. Languages such as Modelica and DEVS allow for a more intuitive definition of models, often through visual notations and with libraries of reusable components for various domains. The semantics of these languages is fixed. While libraries can be created, the language's syntax and semantics cannot be adapted to suit the needs of a particular domain. This tutorial provides an introduction to modelling language engineering, which allows one to explicitly model all aspects (in particular, syntax and semantics) of a (domain-specific) modelling and simulation language and to subsequently synthesize appropriate tooling. We demonstrate the discussed techniques by means of a simple railway network language using AToMPM, a (meta)modelling tool.
Theory of Modeling and Simulation: Discrete Event & Iterative System Computational Foundations (Third Edition) follows its predecessors in providing an authoritative and complete theoretical treatment for researchers, graduate students, and working engineers interested in posing and solving problems using the tools of computational modeling and simulation. We discuss its main unifying concept, the iterative systems specification, which enables models at any level of structure or behavior to be transparently mapped into the Discrete Event System Specification (DEVS) formalism. Emphasizing the integration of discrete event and continuous modeling approaches, the iterative systems specification provides a sound system-theoretic basis for quantized systems and distributed co-simulation, supporting the co-existence of multiple formalisms in model components. This introductory exposition emphasizes basic intuitive concepts rather than the theorems and proofs that support them. As a running example, we apply the iterative systems formalism to bridging the gap between cognitive behavior and neural circuits.
Statecharts, introduced by David Harel in 1987, is a formalism used to specify the behavior of timed, autonomous, and reactive systems using a discrete-event abstraction. It extends Timed Finite State Automata with depth, orthogonality, broadcast communication, and history. Its visual representation is based on higraphs, which combine hypergraphs and Euler diagrams. Many tools offer visual editing, simulation, and code synthesis support for Statecharts. Examples include STATEMATE, Rhapsody, Yakindu, and Stateflow, each implementing different variants of Harel's original semantics. This tutorial introduces modeling, simulation, testing, and deployment of Statecharts. We start from the basic concepts of states and transitions and explain the more advanced concepts of Statecharts by extending a running example (a traffic light) incrementally. We use Yakindu to model the example system. This is an updated version of the paper with the same name that appeared at the Winter Simulation Conference in 2018 (Van Mierlo and Vangheluwe 2018).
Undesired or unexpected properties frequently arise as large-scale, complex systems with non-linear interactions are designed and implemented to address real-life scenarios. Modeling these behaviors in complex systems, as well as analyzing the large amounts of data generated in order to determine the effects of specific behaviors, remains an open problem. In this tutorial, we explore three main complex-system properties and how they can be modelled in well-known scenarios.
Petri Net, a widely studied mathematical formalism, is a graphical notation for modeling systems. Petri Nets provide the foundation for modeling concurrency, communication, synchronization, and resource sharing constraints that are inherent to many systems. However, Petri Nets do not scale well when it comes to modeling and simulation of large systems. Colored Petri Nets (CPNs) extend Petri Nets with a high level programming language, making them more suitable for modeling large systems. The CPN language allows the creation of models as a set of modules in a hierarchical manner and permits both timed and untimed models. Untimed models are used to validate the logical correctness of a system, whereas timed models are used to evaluate performance. This tutorial introduces the reader to the vocabulary and constructs of both Petri Nets and CPNs and illustrates the use of CPN Tools in creating and simulating models by means of familiar simple examples.
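As a concrete anchor for the vocabulary above, a minimal uncolored place/transition net can be simulated in a few lines; the net, place names, and firing policy below are illustrative, and CPN Tools itself works at a much higher level of abstraction.

```python
# Illustrative place/transition Petri net: places hold tokens, a transition is enabled when
# all its input places hold enough tokens, and firing consumes and produces tokens.
marking = {"free_machine": 1, "queue": 3, "busy": 0, "done": 0}

# transition name -> (tokens consumed from input places, tokens produced in output places)
transitions = {
    "start_job":  ({"free_machine": 1, "queue": 1}, {"busy": 1}),
    "finish_job": ({"busy": 1}, {"free_machine": 1, "done": 1}),
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(marking[p] >= k for p, k in inputs.items())

def fire(name):
    inputs, outputs = transitions[name]
    for p, k in inputs.items():
        marking[p] -= k
    for p, k in outputs.items():
        marking[p] = marking.get(p, 0) + k

while any(enabled(t) for t in transitions):
    name = next(t for t in transitions if enabled(t))   # simple deterministic firing choice
    fire(name)

print(marking)   # all queued jobs processed: queue and busy empty, done holds 3 tokens
```

CPNs layer typed ("colored") tokens, hierarchy, and time on top of exactly this firing semantics.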
In recent years, grid-shaped cellular models have gained popularity for modeling complex spatial systems, and Cellular Automata (CA) have been widely used for these purposes. The Cell-DEVS formalism is an extension to CA that can be used to build discrete-event cell spaces, improving their definition by making the timing specification more expressive. Different delay functions can be used to specify the timing behavior of each cell, allowing the modeler to represent complex timing behavior in a simple fashion. CD++ is a modeling and simulation tool that implements the DEVS and Cell-DEVS formalisms. Here, we show the use of these techniques through a formal specification and a variety of application examples, including basic models using CA and various extensions, including triangular and hexagonal topologies.
Artificial Intelligence (AI) is currently at the peak of the hype in terms of both research publications and industrial development. However, the current state of AI remains quite far from the current understanding of human intelligence. One suggestion made in this article is that Model-Driven approaches could be an interesting avenue to complement classical visions of AI and to provide some missing features. Specifically, the use of Model-Driven Engineering tools (such as metamodels and model transformations) could benefit the domain of AI by introducing a way to extend the apprehension of unknown situations. To support this proposal, an illustrative example is provided from the domain of risk and crisis management.
Co-simulation consists of the theory and techniques to enable global simulation of a coupled system via the composition of simulators for each of the components of the model. Despite the large number of applications and growing interest in the challenges, practitioners still experience difficulties in configuring their co-simulations. This tutorial introduces co-simulation of continuous systems, targeted at researchers that want to develop their own co-simulation units and master algorithms, using the Functional Mock-up Interface (FMI) Standard. This document is complemented by online materials (Claudio 2019) that allow the reader to experiment with different co-simulation algorithms, applied to the examples introduced here.
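A fixed-step, Jacobi-type master algorithm is one of the simplest co-simulation schemes such a tutorial builds on. The sketch below couples two hypothetical units in plain Python rather than through an actual FMI implementation; the do_step/get/set operations only mimic the spirit of an FMU interface, and the plant/controller system is an assumed example.

```python
# Illustrative fixed-step (Jacobi) co-simulation master coupling two hypothetical units.
class PlantUnit:                      # first-order plant: dx/dt = (u - x) / tau
    def __init__(self, tau=2.0):
        self.x, self.tau, self.u = 0.0, tau, 0.0
    def set_input(self, u): self.u = u
    def get_output(self): return self.x
    def do_step(self, h):             # internal forward-Euler solver over one macro step
        self.x += h * (self.u - self.x) / self.tau

class ControllerUnit:                 # stateless proportional controller toward a set point
    def __init__(self, kp=1.5, setpoint=1.0):
        self.kp, self.setpoint, self.y = kp, setpoint, 0.0
    def set_input(self, y): self.y = y
    def get_output(self): return self.kp * (self.setpoint - self.y)
    def do_step(self, h): pass

plant, ctrl = PlantUnit(), ControllerUnit()
h, t_end, t = 0.1, 5.0, 0.0
while t < t_end:
    # Jacobi scheme: exchange outputs computed at the start of the step, then step both units
    u, y = ctrl.get_output(), plant.get_output()
    plant.set_input(u)
    ctrl.set_input(y)
    plant.do_step(h)
    ctrl.do_step(h)
    t += h
print(f"plant output at t={t:.1f}: {plant.get_output():.3f}")
```

Gauss-Seidel orderings, adaptive macro steps, and input extrapolation are the refinements typically discussed on top of this baseline.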
Making adjustments to a logistics network to keep it in good condition is a major challenge. Logistics assistance systems are regularly used to support this process. The authors have developed such a logistics assistance system that identifies, evaluates, and proposes promising actions to the decision-maker. A simheuristic approach, utilizing a data-driven discrete event simulation in combination with meta-heuristic algorithms, is used for this purpose. A typical feature of such systems, however, is that the possible changes to the logistics network are predefined by the respective logistics assistance system. To address this aspect, the authors have developed a novel method that allows for the modeling, integration, and simulation of user-generated actions. The method is based on a domain-specific language for the formal description of actions in logistics networks, allowing domain experts to model actions without in-depth programming knowledge.
When planning logistics systems with multiple transport objects or systems, modeling requires the implementation of complex control logic to avoid collisions and deadlocks. This paper illustrates a procedure for developing such control logic using the example of rail-based storage and retrieval units in combination with lifts in the picking area of an industrial laundry. Typical collision situations and a possible solution approach for avoiding them are described. In addition, a discrete event simulation demonstrates which situations occur most frequently depending on warehouse dimensioning.
In production control with a stable order sequence, the assembly order is fixed in the form of the so-called pearl chain. However, parallelism and instabilities in the production process cause scrambling in the order sequence. The stability of the order sequence can be increased by using sequencing buffers between the individual production units. In the automobile industry, all sequencing buffers frequently share one large automated storage and retrieval system (AS/RS) with limited capacity. In existing approaches for dimensioning sequencing buffers, this shared capacity restriction is not considered, even though the optimal distribution of buffer capacity might be an important lever for increasing the sequence stability. Therefore, the focus of this paper is the identification of the optimal capacity distribution between several sequencing buffers using the same storage area. A greedy simulation-based improvement heuristic is developed which makes it possible to find promising solutions with a practical and intuitive optimization approach.
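The greedy simulation-based improvement idea can be sketched as follows: starting from a minimal capacity per sequencing buffer, repeatedly assign one additional storage slot to the buffer whose increase yields the best simulated score. The evaluation function below is a placeholder stub, not the paper's AS/RS simulation, and all capacities are hypothetical.

```python
# Illustrative greedy allocation of a shared storage capacity across sequencing buffers.
import random

def simulated_stability(capacities):
    """Placeholder for a simulation run returning a sequence-stability score."""
    random.seed(sum(c * 31 ** i for i, c in enumerate(capacities)))   # deterministic stub
    return sum(c ** 0.5 for c in capacities) + random.uniform(-0.1, 0.1)

def greedy_allocation(n_buffers=3, total_capacity=30, min_per_buffer=2):
    capacities = [min_per_buffer] * n_buffers
    while sum(capacities) < total_capacity:
        best_buffer, best_score = None, float("-inf")
        for b in range(n_buffers):                 # try adding one slot to each buffer
            trial = capacities.copy()
            trial[b] += 1
            score = simulated_stability(trial)
            if score > best_score:
                best_buffer, best_score = b, score
        capacities[best_buffer] += 1               # keep the single best increment
    return capacities

print(greedy_allocation())   # near-even split under this symmetric placeholder objective
```

With a real simulation in place of the stub, asymmetric scrambling between production units would pull the allocation away from an even split.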
Despite the documented benefits of ridesourcing services, recent studies show that they can significantly slow down traffic in the densest cities. To implement congestion pricing policies on those vehicles, regulators need to estimate the degree of the congestion effects. This paper studies a simulation-based approach to address the technical challenges arising from the system dynamics models and the congestion price optimization. To estimate traffic conditions, we use a metamodel representation for traffic flow and a numerical method for data interpolation. To reduce the burden of replicated evaluations in stochastic optimization, we use a simulation-optimization approach to compute the optimal congestion price. This data-driven approach can potentially be extended to solve large-scale congestion pricing problems under uncertainty.
The car rental business is a multi-billion-dollar industry with ever-increasing competitiveness. Car rental companies must adopt dynamic pricing strategies to maximize revenues and operational efficiency. The aim of this study is to understand what pricing strategies work best for rental companies so as to achieve higher revenue for same-location pick-up and drop-off of rentals. With this goal in mind, we have modified a simulation model from a previous study to incorporate the logic for the current analysis. The analysis has been conducted with realistic customer demand inputs and a design of experiments consisting of 195 scenarios. The results show that with our improved pricing strategy, it is possible to increase revenues by more than 20 percent.
Emerging data that track the dynamics of large populations bring new potential for understanding human decision-making in a complex world and supporting better decision-making through the integration of continued partial observations about dynamics. However, existing models have difficulty with capturing the complex, diverse, evolving, and partially unknown dynamics in social networks, and with inferring system state from isolated observations about a tiny fraction of the individuals in the system. To solve real-world problems with a huge number of agents and system states and complicated agent interactions, we propose a stochastic kinetic model that captures complex decision-making and system dynamics using atomic events that are individually simple but together exhibit complex behaviors. As an example, we show how this model offers significantly better results for city-scale multi-objective driver route planning in significantly less time than models based on deep neural networks or co-evolution.
Simulation-based Optimization Tool for Field Service Planning
Gabriel G. Castañé, Helmut Simonis, and Kenneth N. Brown (University College Cork); Yiqing Lin (United Technologies Research Center); Cemalettin Ozturk and Michele Garraffa (United Technologies Research Center, Ireland); and Mark Antunes (United Technologies Corporation)
Many companies that deliver units to customer premises need to provide periodic maintenance and on-request services through their field service technicians. A common challenge is to evaluate different design choices related to staffing decisions, technician scheduling strategies, and technological improvements in order to make the system more efficient. This work provides a simulation-based optimization tool to support decision makers in tackling this challenging problem. The proposed framework relies on an optimization engine for the generation of daily plans. A simulation component is used to evaluate the applicability of such plans by taking into account stochastic factors. Furthermore, an interface manages the communication between these two components and enables a feedback loop between the simulator and the optimizer to achieve more robust plans. The applicability of the framework is demonstrated through a business case evaluating different staffing decisions.
Evaluating Mixed Integer Programming Models for Solving Stochastic Inventory Problems
Bas Bluemink and Ton de Kok (Eindhoven University of Technology) and Balan Srinivasan and Reha Uzsoy (North Carolina State University)
We formulate mixed integer programming (MIP) models to obtain approximate solutions to finite horizon stochastic inventory models. These deterministic formulations of necessity make a number of simplifying assumptions, but their special structure permits very short model solution times under a range of experimental conditions. We evaluate the performance of these models using simulation optimization to estimate the true optimal solutions. Computational experiments identify several demand and cost scenarios in which the MIP models yield near-optimal solutions, and other cases where they fail, suggesting directions for future research.
This paper presents preliminary results of an agent-based simulation study aimed at establishing whether a fleet of electric vans with different charging options can match the performance of a diesel fleet. We describe a base model imitating the operations of a real-world retailer using agents. We then introduce electric vehicles and charging hubs into our model. We evaluate how the use of electric vehicles, charging power and charging hubs influence the retailer’s operations. Our simulation experiment suggests that, though they are useful, technological interventions alone are not sufficient to match the performance of a diesel fleet. Hence, reorganization of the urban delivery system is required in order to reduce carbon emissions significantly.
The Electric Vehicle Routing Problem with Time Windows and Stochastic Waiting Times at Recharging Stations is an extension of the Electric Vehicle Routing Problem with Time Windows in which vehicles may wait in a queue before recharging their battery due to a limited number of available chargers. Long waiting times at the stations may cause delays for both the customers and the depot. In this study, we model the waiting times using M/M/1 queueing system equations. We solve small instances with CPLEX assuming expected waiting times at the stations and calculate the reliability of these solutions by simulating the waiting times. We observe that, as chargers become busier, the reliability of the solutions obtained with expected waiting times decreases.
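For reference, the standard M/M/1 relations used in such a model are, for arrival rate λ and service rate μ with λ < μ: utilization ρ = λ/μ and expected queueing delay Wq = ρ/(μ − λ) = λ/(μ(μ − λ)). A tiny numeric example with hypothetical rates:

```python
# M/M/1 expected waiting time at a single-charger station (illustrative rates).
lam, mu = 4.0, 6.0             # vehicles arriving per hour, vehicles charged per hour
rho = lam / mu                 # charger utilization
wq = rho / (mu - lam)          # expected time in queue before charging (hours)
w = wq + 1.0 / mu              # expected total time at the station (queue + charging)
print(f"utilization={rho:.2f}, E[wait]={60 * wq:.1f} min, E[sojourn]={60 * w:.1f} min")
```

As λ approaches μ the queueing delay grows without bound, which is why solutions built on the expected waiting time become less reliable as chargers get busier.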
Today, with the help of low-cost and clean electricity, electric vehicles (EVs) show great potential for future urban transportation with low greenhouse gas emissions and air pollution. However, the use of EVs is still in its infancy, with limited numbers of users worldwide. Public charging infrastructure can be one of the drivers of wider EV adoption. In this paper, we aim to develop a framework to establish a reliable urban public charging infrastructure. We first determine the optimal locations and sizes of the charging stations with a robust optimization model incorporating uncertain traffic flows and existing power grid networks; we then use a discrete event simulation approach to model more realistic charging demands. We further improve these optimal solutions and verify the EV charging infrastructure considering both power and transportation networks. We validate our models with real traffic and power grid data from a Southeast Chinese municipality.
Capacity and workforce management in a distribution center can have significant impacts on the overall supply chain. This paper examines the effects of workforce staffing strategies employed in the warehouse operations of a beverage distribution center located in the Bio-Bio Region, Chile. The workforce is responsible for unloading and storing inbound product shipments from distant production plants, as well as retrieving and preparing outbound product shipments for local delivery. A simulation model was used to guide how to improve warehouse operations as measured by load preparation time, workforce staffing costs, and maximum storage capacity utilization. The results recommend increasing warehouse pallet storage capacity to improve efficiency. Additionally, scenarios were evaluated concerning the firm’s willingness-to-pay for improvements related to workforce staffing and training. The results indicate that investing in the workforce will reduce the firm's load preparation time by as much as 15%.
Retailers have been increasing standards for suppliers in regard to demand-fulfillment performance. To discourage partial deliveries due to unacceptable, defective products, retailers can unilaterally back-charge the supplier a penalty or collaboratively improve the supplier's production process yield and reduce the supplier's quality investment expenditure. In this study, we create an analytical model that compares the retailer's penalty-enforcement and cost-reduction approaches in which the supplier must optimize its production process yield to minimize the total expected cost. The findings indicate that using either approach can induce the supplier to improve process yield. The simulation results show that the cost-reduction approach may require less effort than the penalty-enforcement approach to attain the same level of quality improvement.
An Agent-based Modelling Framework for Urban Agriculture
Adam Ghandar, Georgios Theodoropoulos, Miner Zhong, Bowen Zhen, Shijie Chen, and Yue Gong (Southern University of Science and Technology) and Ayyaz Ahmed (University of Engineering and Technology)
Agricultural innovation is imperative in order to meet the global challenge of sustainably feeding large urban populations. This paper contributes a modelling framework for urban agriculture, and an implementation in a scenario based on the fast-growing megacity of Shenzhen, located near Hong Kong in southern China. We also review related work and provide a gap analysis between the requirements for modelling modern urban agricultural systems and related work on agricultural supply chains, production, and land use. The proposed framework will facilitate the development of a novel decision support system to coordinate decentralized urban agricultural production units in order to realize, at scale, the numerous benefits of co-locating production and consumption in the urban environment.
Wheat, maize, and rice together make up almost two-thirds of the world's dietary needs, and soybeans account for three-quarters of global livestock feed. Since over half of the world's supply of these commodities is exported by sea, the free flow of marine traffic is paramount. Current optimization models found in a number of studies involving food commodities lack the ability to capture the inherent variance in maritime transport. To capture this variance, a discrete-event simulation was built to understand how disruptions in this system impact those who rely on its unhindered functionality. Monthly export data are used, and the maritime chokepoints of the Panama Canal, the Suez Canal, and the Strait of Gibraltar are modeled for disruption. Results indicate significant food shortages for all importers studied. Marine traffic through the Strait of Malacca was also significantly impacted when any of the three chokepoints studied were closed.
Simulation and Optimization of Operations for Offshore Installations Planning using a Model Predictive Control Scheme
Daniel Rippel, Nicolas Jathe, and Michael Lütjen (BIBA - Bremer Institut für Produktion und Logistik GmbH at the University of Bremen); Helena Szczerbicka (Leibniz University of Hannover); and Michael Freitag (University of Bremen)
Wind energy is a promising technology to cover the world's need for sustainable energy production. Nevertheless, the construction of offshore wind farms imposes particular challenges. About 15% to 20% of the costs of offshore wind farms can be attributed to logistics during construction, and highly dynamic weather conditions render accurate planning difficult. This article presents an approach to installation planning that uses a model predictive control scheme to combine short-term control with model-based simulations, the latter being used to optimize mid- to long-term plans. In addition, the article presents a new approach to incorporating the uncertainty of weather predictions into operations planning by estimating the expected duration of offshore operations. Results show increased efficiency of the generated plans with growing planning horizons, limited by the accuracy of the weather forecasts.
Brazil is one of the world's leading iron ore exporters, producing millions of tons annually. To transport ore, the country has specialized port terminals and logistical infrastructure designed specifically for export. The improvement of such systems remains a challenge and attracts substantial investment, generating great synergy between companies and universities in the search for solutions. In this context, we present a case study involving a Brazilian terminal designed to handle iron ore, which must now import coal as well. The objective, therefore, is to analyze the effects of this new operation on the system's performance indicators. To evaluate and compare different scenarios, a discrete-event simulation model was built using the software Simul8. The results show that the new operation is viable, maintaining an 89% service level when handling three million tons of coal per year.
This study was designed to evaluate innovative last-mile delivery concepts involving autonomous parcel robots with simulation and optimization. In the proposed concept, the last mile of parcel delivery is split into a two-tiered system, where parcels are first transported to a transfer point by conventional trucks and then delivered with parcel robots on customer demand.
The purpose of this publication is to compare different time slot selection options for customers, namely due window and on-demand selection, in the context of city logistics measures such as access regulations and driving bans for city centers. An agent-based simulation model is used, including a Geographic Information System environment and optimization algorithms for allocation and scheduling of delivery robots. The concept is tested in a comprehensive case study located in the city center of Cologne (Germany) based on real data from the parcel delivery company Hermes.
The ever-increasing demand for fresh and healthy products has created an urgent need for effective planning in Agri-Fresh Produce Supply Chains (AFPSC). However, AFPSC faces many challenges, including product vulnerability to market disruption and limited shelf life. In the case of a no-deal Brexit (i.e., the UK leaving the EU without an agreement), trade between Ireland and the UK will most probably be subjected to customs control. As a result, transportation delays and product deterioration rates will increase. Based on interviews with an Irish AFPSC forwarder, a simulation model was developed to investigate different system dynamics and operating rules under different delay patterns on the (yet non-existent) inner-Irish border. A cost analysis of varying border regimes favours a more thorough change in operations, e.g., route adjustments. This paper gives a first indication of how AFPSC forwarders in Ireland can deal with a no-deal Brexit situation.
Several factors influence traffic congestion and overall traffic dynamics. Simulation modelling has been utilized to understand traffic performance parameters during congestion. This paper focuses on drivers' route-selection behavior by differentiating three decision rules: shortest-distance routing, shortest-time routing, and less-crowded-road routing. This research generated 864 different scenarios to capture various traffic dynamics under collective route-selection behavior. Factors such as vehicle arrival rate, behavior at the system boundary, and traffic light phasing were considered. The simulation results revealed that the shortest-time routing scenario offered the best solution considering all forms of interactions among the factors. Overall, this routing behavior reduces traffic wait time and total time (by 69.5% and 65.72%, respectively) compared to shortest-distance routing.
This paper examines the modeling of disruption events within a bulk petroleum supply chain through the use of an object-oriented simulation library. We describe how the library models disruption events and their effects on supply chain operations. This includes how to specify disruptions, mitigation strategies and metrics that can be used to assess the impact of the disruption. The modeling is illustrated via an analysis of a supply chain involving DLA Energy’s bulk petroleum supply chain as impacted by a category 4 hurricane scenario. The effectiveness of pre-positioning of inventory within the supply chain is illustrated.
Export customers requesting empty containers in hinterland areas are serviced by maintaining sufficient inventory at each regional depot. The supply-demand imbalance at the regional level is stabilized by repositioning empty containers between inland depots. We propose an inter-depot empty container repositioning problem and a heuristic real-time decision algorithm to solve it. Initially, single-period travel times are considered and three models are presented: an Allocation Problem (AP), a Value Approximation model (SPL-VA), and a Node Decomposition Heuristic (NDH-SP). The system is simulated over a time horizon by generating real-time supply and demand values, and the system's evolution is studied under each of the proposed models. The VA models perform better than the AP with modest computational effort. The NDH-SP is further generalized to accommodate multi-period travel times. By simulating this algorithm with a demand rejection policy, we observe that maximum demand satisfaction is obtained by allowing medium-sized demand queues at the depots.
Analyzing the Influence of Costs and Delays on Mode Choice in Intermodal Transportation by Combining Sample Average Approximation and Discrete Event Simulation
Ralf Elbert and Jan Philipp Müller (Technische Universität Darmstadt)
Besides transportation costs, the punctual delivery of goods is a key factor for mode choice in intermodal transportation networks. However, only a limited number of studies have so far included stochastic transportation times in Service Network Design, which refers to decisions regarding transportation modes and services. This paper combines a Sample Average Approximation approach with Discrete Event Simulation for Service Network Design with stochastic transportation times, including the corresponding vehicle routing problem for road vehicles. The share of orders transported by intermodal road-rail versus unimodal road transportation, as a function of costs and train delays, is evaluated for a generic transportation relation in Central Europe, backed by empirical data on transportation orders and delay distributions. The results show a strong effect of rail main haulage costs, whereas even for higher train delays road-rail transportation can remain competitive.
Chair: Reha Uzsoy (North Carolina State University)
Considering Energy-related Factors in the Simulation of Logistics Systems
Moritz Poeting (TU Dortmund University), Bastian Prell (Fraunhofer Institute for Machine Tools and Forming Technology), Markus Rabe (TU Dortmund University), Tobias Uhlig (Universität der Bundeswehr), and Sigrid Wenzel (University of Kassel)
In transport, aspects such as emissions and energy consumption have traditionally been taken into account for environmental and economic reasons. In other areas of logistics, such as production logistics and intralogistics, the energy aspect is also becoming increasingly important. The existing literature was recently reviewed in a contribution of the Arbeitsgemeinschaft Simulation (ASIM) to the Winter Simulation Conference 2018 (Uhlig et al. 2018) to develop a map of common approaches and best practices for manufacturing and logistics systems. As a complement, the paper presented here focuses on the application of energy simulation in logistics to give a comprehensive overview and present exemplary case studies. Furthermore, we show a classification of approaches for combining energy aspects with simulation. Finally, we discuss open questions and future trends in this field of research.
The newsvendor problem is one of the classic models in inventory management. Although the optimal order quantity can be calculated, experiments reveal that decision makers often do not select the optimal quantity. These decision biases have been well studied, but analysis of the financial risk associated with these suboptimal decisions is limited. This paper examines the impact on profit by comparing the expected profit of suboptimal decisions with that of optimal decisions. We first conduct a literature review of behavioral results in the newsvendor setting. We use the results reported in the literature to determine parameters in a model for behavioral newsvendor decision making. We then build a Monte Carlo simulation that incorporates the behavioral decision making model, heterogeneity of decision makers, and parameters of the newsvendor decision to calculate the expected profit loss. The simulation results shed light on the financial risk associated with these inventory decisions.
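A minimal sketch of the kind of comparison described above (illustrative only; the paper's behavioral model, parameters, and heterogeneity of decision makers are not reproduced here): the critical-fractile order quantity Q* = F⁻¹(cu/(cu + co)) is evaluated against a stylized "pull-to-center" biased quantity via Monte Carlo estimation of expected profit.

```python
import random
import statistics

def expected_profit(order_qty, price, cost, demand_sampler, n=100_000, seed=1):
    """Monte Carlo estimate of expected newsvendor profit (no salvage value assumed)."""
    rng = random.Random(seed)
    profits = []
    for _ in range(n):
        d = demand_sampler(rng)
        sold = min(order_qty, d)
        profits.append(price * sold - cost * order_qty)
    return statistics.mean(profits)

if __name__ == "__main__":
    price, cost = 10.0, 4.0                     # hypothetical unit revenue and cost
    mu, sigma = 100.0, 30.0                     # hypothetical normal demand
    sampler = lambda rng: max(0.0, rng.gauss(mu, sigma))

    # Critical fractile: F(Q*) = cu / (cu + co), with cu = price - cost, co = cost
    crit = (price - cost) / price
    q_opt = statistics.NormalDist(mu, sigma).inv_cdf(crit)

    # Stylized biased decision: anchoring toward mean demand ("pull-to-center")
    q_biased = 0.5 * q_opt + 0.5 * mu

    p_opt = expected_profit(q_opt, price, cost, sampler)
    p_biased = expected_profit(q_biased, price, cost, sampler)
    print(f"Q*={q_opt:.1f}  E[profit]={p_opt:.1f}")
    print(f"Q_biased={q_biased:.1f}  E[profit]={p_biased:.1f}  loss={p_opt - p_biased:.1f}")
```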
In this paper, we simulate allocation policies for a two-stage inventory system that receives perfect advance demand information (ADI) from customers belonging to different demand classes. Demands for each customer class are generated by independent Poisson processes, while the processing times are deterministic. All customers in the same class have the same demand lead time (the difference between the due date and the requested date) and back-ordering costs. Each stage in the inventory system follows an order base-stock policy in which a replenishment order is issued upon arrival of a customer order. The problem requires a fast and reliable method to determine the system performance under different policies and ADI. Thus, we employ discrete-event simulation to obtain output measures such as inventory costs, fill rates, waiting times, and order allocation times. A numerical analysis is conducted to identify a reasonable policy to use in this type of system.
Simheuristics for Logistics, SCM and Transportation
Chair: Javier Faulin (Public University of Navarre)
A Simheuristic for the Unmanned Aerial Vehicle Surveillance-Routing Problem with Stochastic Travel Times and Reliability Considerations
Javier Panadero and Angel A. Juan (Universitat Oberta de Catalunya), Carles Serrat and Manel Grifoll (Universitat Politècnica de Catalunya-BarcelonaTECH), Mohammad Dehghanimohammadabadi (Northeastern University), and Alfons Freixes (Euncet Business School)
In the unmanned aerial vehicle (UAV) surveillance-routing problem, a limited fleet of UAVs with driving-range limitations has to visit a series of target zones in order to complete a monitoring operation. This operation typically involves taking images and/or registering some key performance indicators. Whenever this surveillance action is repetitive, a natural goal is to complete each cycle of visits as fast as possible, so that the number of times each target zone is visited during a time interval is maximized. Since many factors might influence travel times, they are modeled as random variables. Reliability issues are also considered, since random travel times might mean that a route cannot be successfully completed before the UAV runs out of battery. In order to solve this stochastic optimization problem, a simheuristic algorithm is proposed. Computational experiments illustrate these concepts and test the quality of our approach.
Combining the Internet of Things with Simulation-based Optimization to Enhance Logistics in an Agri-Food Supply Chain
David Raba, Angel A. Juan, and Javier Panadero (Open University of Catalonia); Alejandro Estrada-Moreno (Universitat Rovira i Virgili); and Christopher Bayliss (Open University of Catalonia)
This paper discusses how the Internet of Things and simulation-based optimization methods can be effectively combined to enhance refilling strategies in an animal feed supply chain. Motivated by a real-life case study, the paper analyses a multi-period inventory routing problem with stochastic demands. After describing the problem and reviewing the related literature, a simulation-based optimization approach is introduced and tested via a series of computational experiments. Our approach combines biased-randomization techniques with a simheuristic framework to make use of data provided by smart sensor devices located at the top of each farm silo. From the analysis of results, some managerial insights are also derived and a new business model is proposed.
Simulation-based Optimization in Transportation and Logistics: Comparing Sample Average Approximation with Simheuristics
Angel A. Juan (Open University of Catalonia, Euncet Business School); Lorena Reyes-Rubiano (Public University of Navarre, University of La Sabana); Rocio de la Torre (Public University of Navarre); Javier Panadero (Open University of Catalonia, Euncet Business School); and Javier Faulin and Juan Ignacio Latorre (Public University of Navarre)
Simulation-based optimization (SBO) refers to a family of simulation-optimization methods employed to solve complex optimization problems with stochastic components. This paper reviews some recent applications of SBO in the area of transportation and logistics, and presents stochastic variants of the team orienteering problem to illustrate SBO solution methods. We then compare two of the most popular approaches: the sample average approximation method and simheuristic algorithms. Finally, the paper concludes by considering the value of these approaches and outlining further research needs.
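To make the comparison concrete, a generic sample average approximation sketch is shown below (a toy decision problem, not the paper's team orienteering formulation; the candidate set, cost function, and scenario distribution are all hypothetical): the stochastic objective E[f(x, ξ)] is replaced by an average over sampled scenarios, and the selected candidate is re-evaluated on an independent sample.

```python
import random

def saa_select(candidates, cost_fn, scenario_sampler, n_train=200, n_eval=5000, seed=0):
    """Generic SAA: pick the candidate minimizing the sample-average cost over
    n_train scenarios, then re-estimate its cost on an independent evaluation sample."""
    rng = random.Random(seed)
    train = [scenario_sampler(rng) for _ in range(n_train)]
    best = min(candidates,
               key=lambda x: sum(cost_fn(x, s) for s in train) / n_train)
    evals = [scenario_sampler(rng) for _ in range(n_eval)]
    est = sum(cost_fn(best, s) for s in evals) / n_eval
    return best, est

if __name__ == "__main__":
    # Toy example: choose a departure buffer to trade waiting against a lateness penalty.
    candidates = [0, 5, 10, 15, 20, 25, 30]        # buffer minutes (hypothetical)
    sampler = lambda rng: rng.gauss(15, 8)          # random travel delay (hypothetical)
    cost = lambda buffer, delay: buffer + 10 * max(0.0, delay - buffer)
    best, est = saa_select(candidates, cost, sampler)
    print("best buffer:", best, "estimated cost:", round(est, 2))
```

A simheuristic would instead embed the simulation-based evaluation inside a metaheuristic search, which is what the papers in this session compare against.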
Inventory management models help determine policies for managing trade-offs between customer satisfaction and service cost. Initiatives like lean manufacturing, pooling, and postponement have been proven to be effective in mitigating the trade-offs by maintaining high levels of service while reducing system inventories. However, such initiatives reduce the buffers, exacerbating supply chain issues in the event of a disruption. We evaluate stocking decisions in the presence of operational disruptions caused by random events such as natural disasters or man-made disturbances. These disruptions represent different risks from those associated with demand uncertainties as they stop production flow and typically persist longer. Thus, operational disruptions can be much more devastating though their likelihood of occurrence may be low. Using stochastic simulation, we combine the newsvendor model capturing demand uncertainty costs with catastrophe models capturing disruption/recovery costs. We apply data analytics to the simulation outputs to obtain insights to manage inventory under disruption risk.
We analyze the optimal production and inventory assortment decisions of a seed manufacturer facing increased yield variability triggered by extreme weather conditions in addition to long supply lead times as well as supply and demand uncertainty. We also investigate the limits of operational flexibility in the form of postponement using simulation models that are calibrated through field data. Our analysis shows that a minor increase in future yield variability leads to a large increase in the optimal seed production quantities. Such a rise would not only significantly increase the seed manufacturer's working capital requirements, but may also make its current supply capacity inadequate to fulfill its optimal production plans. We also show that the value of postponement decreases with higher yield variability, which also renders low-margin seeds more susceptible to this volatility.
Since the oil and gas industry crisis in late 2014, oil companies have been increasing their focus on optimizing operations and reducing the cost of upstream logistics. However, this effort must be balanced against the need to maintain a satisfactory logistics service that guarantees the continuity of maritime unit operations. The objective of this paper is to propose a different supply strategy that reduces the risk of diesel shortages in maritime units at the Campos Basin, Brazil. The impacts of the new strategy on service level and cost are also taken into account. A discrete-event stochastic simulation model has been developed to consider the uncertainty components involved in the upstream logistics processes. Results show that the proposed supply strategy reduces the risk of diesel unavailability, decreases demand lead time, and requires fewer supply vessels.
In the field of supply chain simulation, transport relations are often modeled as transport times drawn from distributions. For long-distance transport relations this is usually a suitable approach, but for short-distance transport within large cities, delays depend on specific roads and the time of day. Some simulation tools offer geographical data for modeling actual roads. However, in order to model time-dependent transport times, additional data are needed. In this paper, we present an approach to tackle this problem. Road networks are derived from OpenStreetMap data (including traffic signals). In order to obtain the average speed of vehicles on an hourly basis, we conduct pre-simulation runs modeling the entire inner-city traffic. The respective vehicle trips are derived from trajectory data of cell phone users, where the assignment of users to cell phone tower sections is given for each hour of the day. First results for the city of Winnipeg are presented.
Today, an increasing number of vehicles use IoT devices to communicate with a control center to obtain such traffic information as road congestion conditions and the current shortest route. We analyze the enormous amount of data obtained from these vehicles and detect jam tails even if the percentage of vehicles with IoT devices is small. For effective performance and improved accuracy when analyzing an enormous amount of data for a wide road area, we use a multi-agent system to collect and analyze the IoT data, which is stored in memory with a hierarchical structure organized by vehicle agents and road agents. This structure enables time series data to be analyzed from the viewpoint of each vehicle and to be aggregated for jam analysis from the viewpoint of each road. Furthermore, we use a large-scale traffic simulator to evaluate the behavior of this IoT agent system.
An Inventory-Routing Problem with Stochastic Demand and Stock-out: A Solution and Risk Analysis using Simheuristics
Bhakti Stephan Onggo (University of Southampton), Angel A. Juan and Javier Panadero (Universitat Oberta de Catalunya), Canan G. Corlu (Boston University), and Alba Agustin (Public University of Navarre)
Supply chain operations have become more complex. Hence, in order to optimise them, we often need to simplify the optimisation problem in such a way that it can be solved efficiently using either exact methods or metaheuristics. One common simplification is to assume that all model inputs are deterministic. However, for some management decisions, considering the uncertainty in model inputs (e.g., demands, travel times, processing times) is essential; otherwise, the results may be misleading and might lead to an incorrect decision. This paper considers an example of a complex supply chain operation that can be viewed as an Inventory-Routing Problem with stochastic demands. We demonstrate how a simheuristic framework can be employed to solve the problem and illustrate the risks of not considering input uncertainty. The results show that simheuristics can produce good results, and that ignoring the uncertainty in the model inputs may lead to sub-optimal decisions.
Stochastic Simulation Model Development for Biopharmaceutical Production Process Risk Analysis and Stability Control
Bo Wang and Wei Xie (Northeastern University), Tugce Martagan and Alp Akcay (Eindhoven University of Technology), and Canan Gunes Corlu (Boston University)
In this paper, we develop a stochastic simulation model for biomanufacturing risk analysis, focusing on the production process from raw materials to finished drug substance. Drawing on biotechnology domain knowledge, we model how the properties or attributes of each batch dynamically evolve along the production process. We consider the main sources of uncertainty leading to batch-to-batch variation, such as raw material biomass, cell culture, and target protein purification. The proposed simulation model allows us to incorporate the underlying physical-chemical interactions as well as the no-wait constraint in the purification process. It can be used to facilitate biomanufacturing risk management and guide coherent operational decision making (e.g., production scheduling and quality control) so that the stability of bio-drug quality can be improved while efficiently utilizing resources and speeding up time to market.
Klein Mechanisch Werkplaats Eindhoven (KMWE) is a precision manufacturing company situated in the Netherlands that recently relocated to a new location known as the 'Brainport Industries Campus' (BIC). This move allowed KMWE to improve the performance of its manufacturing facility known as the 'Tool Service Center' (TSC) by investing in vertical automated storage and retrieval systems (AS/RSs). However, these decisions needed to be made under input uncertainty, since the move to BIC and the modernization of existing equipment would cause changes in operating parameters inside the facility about which little was known in advance. In this study, we show how hybrid simulation modelling was used to assess the impact of input uncertainties (such as operator productivity and vertical storage height) on the throughput performance of TSC. Ultimately, the outcomes of this research project were used by KMWE to make an investment decision on the quantity of new equipment to acquire.
Track Coordinator - Manufacturing Applications: Christoph Laroque (University of Applied Sciences Zwickau), Loo Hay Lee (National University of Singapore), Guodong Shao (National Institute of Standards and Technology)
Manufacturing Applications
Smart Manufacturing
Chair: Maja Bärring (Chalmers University of Technology)
Transparency and Training in Manufacturing and Logistics Processes in Times of Industry 4.0 for SMEs
Henning Strubelt and Sebastian Trojahn (Otto von Guericke University Magdeburg) and Sebastian Lang (Fraunhofer Institute for Factory Operation and Automation IFF)
Based on our experience from multiple projects in the areas of production planning, scheduling, and delivery reliability in collaboration with small and medium enterprises (SMEs), we know that employees continue to change the planned manufacturing sequence at their workstations. As a result, the expected optimum of the production plan cannot be reached.
We discuss the influence of different employee behavior patterns on the overall production output. We have developed a simulation-based planning tool for optimizing production planning in manufacturing SMEs. This planning tool can not only support planners and machine operators, but can also be used as a training tool for employees in addition to improving the production sequence. It uses behavioral patterns to increase learning effects and generate a higher system understanding on the side of the employees. In this study, we use a simulation model to demonstrate the negative effects of specific behavioral patterns.
Digital models for the planning and control of production systems are a key asset for manufacturers seeking to gain or maintain their leadership. However, these models are often based on frameworks that do not take the real-time dimension into account; hence it is difficult to exploit them for short-term decisions on complex systems. The goal of this work is to prove the applicability of a Real-Time Simulation (RTS) framework that prescribes exchanging current-status data with a manufacturing system and running alternative simulation models to decide on the next moves. To this end, we use a LEGO® Manufacturing System (LMS) together with a discrete-event simulator that plays the role of its digital model. The results of this proof of concept show that the proposed framework can effectively be used to find better production rules for a manufacturing system in real time.
Infrastructure for Model Based Analytics for Manufacturing
Sanjay Jain (The George Washington University), Anantha Narayanan (University of Maryland), and Yung-Tsun Tina Lee (National Institute of Standards and Technology)
Multi-resolution simulation models of manufacturing systems, such as the virtual factory, coupled with analytics offer exciting opportunities for manufacturers to exploit the increasing availability of data from their corresponding real factories at different hierarchical levels. A virtual factory model can be maintained as a live representation of the real factory and used to greatly accelerate learning from data using analytics applications. These applications may range from the machine level to the manufacturing management level. While large corporations are already embarking on model-based analytics initiatives, small and medium enterprises (SMEs) may find it challenging to set up a virtual factory model and analytics applications due to barriers of expertise and investments in hardware and software. This paper proposes a shared infrastructure for virtual factory model based analytics that can be employed by SMEs. A demonstration prototype of the proposed shared infrastructure is presented.
During recent years, Autonomous Vehicle Storage and Retrieval Systems (AVS/RS) have been widely applied to meet the increasing demand for rapid and flexible large-scale storage and retrieval tasks. This paper focuses on the control strategies for coordinating the subsystem operations with regard to the conveyor system, rack storage system and pick-up system in order to maximize the system’s throughput capacity and minimize the storage/retrieval times of items in an e-commerce picking warehouse. The study is based on a large-scale shoe manufacturer’s warehouse with an eight-zone AVS/RS. We describe a simulation model that was built to validate the proposed control strategies and thus provides insights for system management.
Data from real-time indoor localization systems (RTILS) based on ultra-wideband (UWB) technology provide spatio-temporal information on the material flows of production orders on the shop floor. This paper investigates how historical position data can be used to determine lead times and their respective time shares. We propose three different approaches for enriching spatio-temporal trajectories with process information. Two of them are online algorithms for the automated posting of process times using either points or areas of interest. The third is an offline classification approach that minimizes the error that occurs when assigning measurements to processes while generating semantic trajectories. Furthermore, a sensor fusion concept is presented, which is necessary to split the lead times of the operations into smaller time shares for simulation input modeling.
Utilizing Discrete Event Simulation to Support Conceptual Development of Production Systems
Hannes Andreasson, John Weman, Daniel Nåfors, Jonatan Berglund, and Björn Johansson (Chalmers University of Technology); Karl Lihnell (PowerCell Sweden AB); and Thomas Lydhig (Semcon AB)
Discrete Event Simulation (DES) is a well-reputed tool for analyzing production systems. However, the development of new production system concepts introduces challenges arising from uncertainties, frequent concept changes, and limited input data. This paper investigates how DES should be applied in this context and proposes an adapted simulation project methodology designed to deal with the identified challenges. Key adaptations include parallel and iterative methodology steps and close involvement of the simulation team in the development of the new concept. The proposed methodology has been applied and evaluated in an industrial case study during the development of a new production system concept. The findings show that the methodology can reduce the impact of the identified challenges and provide valuable feedback that contributes to the development of both the simulation model and the production system concept. Furthermore, it better facilitates the evaluation of investments in new technology.
Chair: Guodong Shao (National Institute of Standards and Technology)
Digital Twin for Smart Manufacturing: the Simulation Aspect
Guodong Shao (National Institute of Standards and Technology), Sanjay Jain (George Washington University), Christoph Laroque (University of Applied Sciences Zwickau), Loo Hay Lee (National University of Singapore), Peter Lendermann (D-SIMLAB Technologies Pte. Ltd.), and Oliver Rose (Universität der Bundeswehr München)
The purpose of this panel is to discuss the state of the art in digital twin for manufacturing research and practice from the perspective of the simulation community. The panelists come from the US, Europe, and Asia representing academia, industry, and government. This paper begins with a short introduction to digital twins and then each panelist provides preliminary thoughts on concept, definitions, challenges, implementations, relevant standard activities, and future directions. Two panelists also report their digital twin projects and lessons learned. The panelists may have different viewpoints and may not totally agree with each other on some of the arguments, but the intention of the panel is not to unify researchers’ thinking, but to list the research questions, initiate a deeper discussion, and try to help researchers in the simulation community with their future study topics on digital twins for manufacturing.
Real-time dispatching in job shops can be interpreted as a sequential decision-making process in which decisions are made in successive stages. In this paper, we first use a branching tree with a time axis to depict the problem, and then propose a nested-simulation-based approach to solve it. When a dispatching decision is requested by the manufacturing environment, simulation is used to predict the possible future situation and to evaluate all job alternatives. During each alternative's simulation, whenever a new decision point emerges, simulation is invoked again; in this nested alternative simulation, a simple base rule is adopted for making decisions instead of further simulation. The proposed approach is superior to several simple rules with respect to average cycle time and total weighted tardiness.
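A stripped-down sketch of the nested-simulation idea on a single machine (a toy example under simplifying assumptions; the paper's job-shop setting, branching tree, and performance measures are not reproduced): at each decision point, every waiting job is tried first, the remainder is scheduled with a simple base rule (here, shortest processing time), and the candidate with the lowest estimated total weighted tardiness is dispatched.

```python
def weighted_tardiness(sequence, now):
    """Total weighted tardiness of processing 'sequence' starting at time 'now'."""
    t, total = now, 0.0
    for job in sequence:
        t += job["p"]
        total += job["w"] * max(0.0, t - job["d"])
    return total

def base_rule(jobs):
    """Base rule used inside the nested simulation: shortest processing time first."""
    return sorted(jobs, key=lambda j: j["p"])

def nested_simulation_dispatch(jobs, now=0.0):
    """Pick the next job by simulating 'try this job first, then the base rule' for each candidate."""
    def evaluate(candidate):
        rest = [j for j in jobs if j is not candidate]
        return weighted_tardiness([candidate] + base_rule(rest), now)
    return min(jobs, key=evaluate)

if __name__ == "__main__":
    # Hypothetical waiting jobs: processing time p, due date d, weight w
    jobs = [{"id": "A", "p": 4, "d": 5, "w": 1},
            {"id": "B", "p": 2, "d": 9, "w": 3},
            {"id": "C", "p": 6, "d": 8, "w": 2}]
    schedule, t = [], 0.0
    while jobs:
        nxt = nested_simulation_dispatch(jobs, now=t)
        jobs.remove(nxt)
        schedule.append(nxt["id"])
        t += nxt["p"]
    print("dispatch order:", schedule)
```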
An Efficient Multi-Objective Hybrid Simheuristic Approach For Advanced Rolling Horizon Production Planning
Felix Kamhuber and Thomas Sobottka (Fraunhofer Austria Research GmbH); Bernhard Heinzl (Vienna University of Technology/Institute of Computer Engineering); and Wilfried Sihn (Fraunhofer Austria Research GmbH, Vienna University of Technology/Institute of Management Science)
This contribution introduces an innovative holistic multi-objective simheuristic approach for advanced production planning on a rolling horizon basis for a European industrial food manufacturer. The optimization combines an efficient heuristic mixed-integer optimization with a customized Simulated Annealing algorithm. State-of-the-art multi-objective solution techniques fail to address highly fluctuating demands in a suitable way: due to a lack of modelling detail and dynamic constraints, these methods are unable to adapt to seasonal (off-)peaks in demand and to consider resource adjustments. Our approach features dynamic capacity and stock-level restrictions, which are evaluated by an integrated simulation module, as well as a statistical exploratory data analysis. In addition to a smoothed production, mid-term stock levels, setup costs, and the expected utilization of downstream equipment are optimized simultaneously. The results show an approximately 30 to 40% reduction in the output variation rate, yielding a correspondingly reduced requirement for downstream equipment.
Simulation Based Forecast Data Generation and Evaluation of Forecast Error Measures
Sarah Zeiml and Klaus Altendorfer (University of Applied Sciences Upper Austria); Thomas Felberbauer (St. Pölten University of Applied Sciences); and Jamilya Nurgazina (St. Pölten University of Applied Sciences, FH St. Pölten)
Production planning is usually performed based on customer orders or demand forecasts. The demand forecasts in production systems can either be generated by the manufacturing companies themselves, i.e., forecast prediction, or they can be provided by customers. For both alternatives, the quality of the forecasts is critical for success. In this paper, a simulation model to generate forecast data that mimics different forecast behaviors is presented. In detail, an independent forecast distribution and a forecast evolution model are investigated to discuss the value of customer-provided forecasts in comparison to the simple moving average forecast prediction method. The main findings of the paper are that the Root Mean Square Error and the Mean Absolute Percentage Error describe the forecast error well if no systematic effects are present, and that the Mean Percentage Error provides a good measure of systematic effects. Furthermore, systematic effects like overbooking significantly reduce the value of customer-provided forecast information.
Poultry plants have to operate under challenging deadlines due to recent developments in the industry. They have expressed the need for improved control over their batching process to deal with these changes. In particular, they wish to achieve a target throughput to ensure deadlines are met, while minimizing giveaway. This paper proposes an algorithm that is capable of controlling such a poultry product batcher and aims to achieve a target throughput while minimizing giveaway. Using a simulation study, we find that both objectives can be achieved under a wide range of settings using the proposed algorithm.
The semi-online bin covering problem is an NP-hard problem that occurs in the batching process of a high-end poultry processing line. The objective is to form batches of items with minimal giveaway, which is the difference between the target and the realized batch weight. The items in this process are allocated in the order of arrival, and the weights of the first set of items are assumed to be known. We develop a novel hybrid genetic algorithm, combining a genetic algorithm with several local search methods. Simulation experiments based on real-world data are performed to gain managerial insights. These simulations suggest that the proposed algorithm produces high-quality solutions within a reasonable time limit.
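For reference, a simple semi-online baseline in the spirit of the problem above (not the authors' hybrid genetic algorithm; item weights and the batch target are hypothetical): items are added to the current batch in arrival order until the target weight is covered, and giveaway is the excess over the target.

```python
def greedy_batching(weights, target):
    """Next-fit style bin covering: fill the current batch in arrival order until
    it reaches the target weight; giveaway = realized batch weight - target."""
    batches, current = [], []
    for w in weights:
        current.append(w)
        if sum(current) >= target:
            batches.append(current)
            current = []
    giveaway = sum(sum(b) - target for b in batches)
    return batches, giveaway

if __name__ == "__main__":
    # Hypothetical item weights (grams) arriving on the line, batch target 2000 g
    items = [480, 510, 530, 495, 505, 470, 560, 500, 525, 490]
    batches, giveaway = greedy_batching(items, target=2000)
    print("batches:", batches)
    print("total giveaway:", giveaway)
```

A hybrid genetic algorithm such as the one in the paper would instead search over assignments of the known upcoming items to open batches, typically reducing giveaway well below this greedy baseline.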
Application of the Non-Linear Optimization Algorithm Differential Evolution for Optimization of Manufacturing Systems Regarding Their Energy Efficiency
Matthias Meißner (TU Dortmund), Johanna Myrzik (University of Bremen), and Christian Rehtanz (TU Dortmund)
Due to the increasing importance of energy efficiency in manufacturing systems, this work analyzes the use of two optimization algorithms for optimizing production systems: the brute-force method and Differential Evolution, which is based on an evolutionary strategy. The objective value for the optimization is energy efficiency, described by an innovative indicator system that is applicable to any hierarchical level of a production system and considers the interdependencies between individual processes. An exemplary production system is modeled as a discrete-event simulation to analyze the functionality of the optimization algorithms. The results show that Differential Evolution is able to calculate results in less time than the brute-force method, but it is not able to identify the global optimum in all cases. Its reliability depends on the parameterization of its three control variables.
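A minimal DE/rand/1/bin loop illustrating the algorithm discussed above (the energy-efficiency indicator system is not reproduced; a generic test function stands in for the objective), exposing the three control parameters mentioned in the abstract: population size, differential weight F, and crossover rate CR.

```python
import random

def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin minimizer. 'bounds' is a list of (low, high) per dimension."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [objective(x) for x in pop]

    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct individuals other than i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][k] + F * (pop[b][k] - pop[c][k]) for k in range(dim)]
            # Binomial crossover with at least one mutant component
            j_rand = rng.randrange(dim)
            trial = [mutant[k] if (rng.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            # Clip to bounds and apply greedy selection
            trial = [min(max(trial[k], bounds[k][0]), bounds[k][1]) for k in range(dim)]
            f_trial = objective(trial)
            if f_trial < fitness[i]:
                pop[i], fitness[i] = trial, f_trial
    best = min(range(pop_size), key=lambda i: fitness[i])
    return pop[best], fitness[best]

if __name__ == "__main__":
    # Hypothetical stand-in for an energy-efficiency objective: minimize the sphere function
    sphere = lambda x: sum(v * v for v in x)
    best_x, best_f = differential_evolution(sphere, bounds=[(-5, 5)] * 3)
    print("best solution:", [round(v, 4) for v in best_x], "objective:", round(best_f, 6))
```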
Material flow simulations are a powerful tool for planning and improving complex production systems. In practice, however, the potential of simulation often cannot be fully exploited: long development times and high effort complicate the successful realization of simulation projects. In this paper, we present a new approach to automate time-consuming steps in simulation projects. Based on general tracking-and-tracing data, our approach reduces the effort required for data acquisition and automates the steps of model development, model parameterization, and model implementation. The approach automatically identifies a material flow model, classifies and parameterizes the identified material flow elements, and extracts basic control policies. A first prototypical implementation confirms the validity of the approach. The results show that simulation models can be created automatically with low manual effort.
The emergence and steady adoption of machine communication protocols like MTConnect are steering the manufacturing sector towards greater machine interoperability, higher operational productivity, and substantial cost savings with advanced decision-making capabilities at the shop-floor level. The MTConnect GitHub repository and the NIST Smart Manufacturing Systems (SMS) Test Bed are two major resources for collecting data from CNC machines. However, these tools are insufficient and time-consuming in Modeling & Simulation (M&S) scenarios where spawning hundreds of MTConnect agents and thousands of adapters with real-time virtual machining is necessary for advancing research in the digital supply chain. This paper introduces a flexible simulator testbed of multiple MTConnect agents and adapters for simulating Levels 0 and 1 of the ISA-95 framework to help support R&D activities in complex multi-enterprise supply chain scenarios. To the best of the authors' knowledge, there is no publicly accessible multi-enterprise MTConnect testbed yet.
Trends In Automatic Composition of Structures for Simulation Models in Production and Logistics
Sigrid Wenzel (University of Kassel, Institute of Production Technology and Logistics); Jakob Rehof (TU Dortmund University, Chair for Software Engineering); Jana Stolipin (University of Kassel, Institute of Production Technology and Logistics); and Jan Winkels (TU Dortmund University, Chair for Software Engineering)
This paper presents challenges in the field of automatic generation of simulation models in production and logistics. To this end, we first present existing approaches in the field, analyze them, and identify trends in the automatic assembly of structures for simulation models. Subsequently, possibilities from the field of software engineering are discussed, illustrating a possible approach to the composition of structures for simulation models. In particular, the focus is placed on the synthesis of logical structures by means of combinatory logic. How this logic can support the automated generation of structural variants in simulation models is shown using an example from intralogistics. In this example, different possible structural variants are clarified and their potential for automated generation is demonstrated.
Track Coordinator - MASM: Semiconductor Manufacturing: John Fowler (Arizona State University), Tae-Eog Lee (KAIST), Lars Moench (University of Hagen)
MASM: Semiconductor Manufacturing
Scheduling and Dispatching
Chair: John Fowler (Arizona State University)
A Sequential Search Framework for Selecting Weights of Dispatching Rules in Manufacturing Systems
Je-Hun Lee, Young Kim, and Yun Bae Kim (Sungkyunkwan University); Hyun-Jung Kim (SungKyunKwan University); and Byung-Hee Kim and Goo Hwan Chung (VMS Solutions Co.,Ltd.)
Dynamic manufacturing systems consisting of multiple stages generally use a combination of dispatching rules to obtain production schedules. A weighted-sum method, which assigns a weight to each dispatching rule and prioritizes jobs with a high weighted sum, is widely used, especially in LCD and semiconductor manufacturing. A suitable set of weights has to be determined by considering dynamic system states in order to improve system throughput and utilization. Hence, we develop a sequential search framework, based on simulation and decision trees, that can generate a good set of dispatching-rule weights within a short period of time. Through experiments with real fab data, we show that the proposed search method performs better than a random search.
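To illustrate the weighted-sum dispatching mechanism described above (a minimal sketch; the rule set, normalization, and weights are hypothetical, not those of the paper): each dispatching rule scores every waiting lot, the scores are normalized per rule, and the lot with the highest weighted sum is dispatched.

```python
def weighted_sum_dispatch(lots, rules, weights):
    """Score each lot with every rule, normalize scores to [0, 1] per rule,
    and return the lot with the highest weighted sum."""
    # Raw scores: higher = more urgent, for every rule
    raw = {name: [rule(lot) for lot in lots] for name, rule in rules.items()}
    def normalized(name, i):
        lo, hi = min(raw[name]), max(raw[name])
        return 0.0 if hi == lo else (raw[name][i] - lo) / (hi - lo)
    def priority(i):
        return sum(weights[name] * normalized(name, i) for name in rules)
    best = max(range(len(lots)), key=priority)
    return lots[best]

if __name__ == "__main__":
    # Hypothetical lots: remaining processing time, slack to due date, waiting time
    lots = [{"id": 1, "rpt": 12, "slack": 5, "wait": 3},
            {"id": 2, "rpt": 4,  "slack": 9, "wait": 8},
            {"id": 3, "rpt": 7,  "slack": 1, "wait": 2}]
    rules = {"spt": lambda l: -l["rpt"],      # shortest processing time
             "slack": lambda l: -l["slack"],  # minimum slack
             "fifo": lambda l: l["wait"]}     # longest waiting
    weights = {"spt": 0.5, "slack": 0.3, "fifo": 0.2}
    print("dispatch lot:", weighted_sum_dispatch(lots, rules, weights)["id"])
```

A search framework such as the one in the paper would tune the entries of `weights` against simulated fab performance rather than fixing them by hand.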
Work-in-process (WIP) oriented dispatching rules have been widely applied to balance workload across work-centers in wafer fabs. The performance of WIP-oriented rules, e.g., minimum inventory variability scheduling (MIVS) and the WIP control table (WIPCT), relies heavily on the accuracy of the target WIP, as the target WIP plays a major role in measuring the pull request of a work-center. In this paper, to replace the target WIP, we propose a workload indicator (WI) that utilizes global fab information, such as the dynamic workload of work-centers, batch sizes, setup requirements, and lot status, to measure the pull request of a work-center. Furthermore, the proposed WI is applied in a global fab dispatching scheme that considers a K-machine look-ahead and a J-machine look-back. Simulation results show a significant improvement over the target-WIP-oriented rules.
Clustered photolithography tools (CPTs) are very complex and can substantially influence the throughput of wafer fabrication facilities. Therefore, efficient lot scheduling for CPTs can directly improve fab performance. In this paper, we develop mixed integer linear programs (MILPs) for linear, affine, exit recursion, and flow line models of CPTs to optimize schedules with respect to mean cycle time, makespan, and tardiness. We simulate a true CPT using a flow line and solve the MILPs for the other, reduced models mentioned above. Schedules from the reduced models are then input into the flow line optimization model in order to evaluate the loss. Using numerical experiments, we show that exit recursion models outperform the other reduced models. Under time limits, exit recursion models achieve at least 6% better cycle time performance than flow line models for large problems.
Simulation Based Multi-objective Fab Scheduling by Using Reinforcement Learning
Won Jun Lee and Byung Hee Kim (VMS Solutions Co., Ltd.); Key Hoon Ko (VMS Global, Inc.); and Ha Yong Shin (Korea Advanced Institute of Science and Technology (KAIST))
A semiconductor manufacturing fab is one of the most sophisticated man-made systems, consisting of hundreds of very expensive pieces of equipment connected by a highly automated material handling system. The operating schedule has a huge impact on the productivity of the fab. Obtaining an efficient schedule for this much equipment is a very complex problem that cannot be solved by conventional optimization techniques. Hence, heuristic dispatching rules combined with fab simulation are often used for generating fab operation schedules. In this paper, we formulate the fab scheduling problem as a semi-Markov decision process and propose a reinforcement learning method used in conjunction with the fab simulator to obtain a (near-)optimal dispatching policy. The resulting schedule obtained by the proposed method shows better performance than heuristic rules whose parameters are tuned by human experts.
This paper studies the simultaneous scheduling of production and material transfer in semiconductor manufacturing. The simultaneous scheduling approach has recently been adopted in warehouse operations, wherein transbots pick up jobs and deliver them to picking machines for processing, which requires the simultaneous scheduling of jobs, transbots, and machines. However, both a large proportion of the literature and real-world scheduling systems in semiconductor manufacturing consider only one side of the problem. We propose an efficient solution for the job shop scheduling problem (JSP) with transbots that significantly outperforms the other benchmark approaches in the literature.
This paper addresses the problem of controlling the Work-In-Process (WIP) in semiconductor manufacturing by using a global scheduling approach. Global fab scheduling steers scheduling decisions at the work-center level by providing objectives in terms of production targets, i.e., product quantities to complete for each operation and each period of a scheduling horizon. A WIP balancing strategy is proposed to minimize the product mix variability in terms of throughput and cycle time. This strategy is enforced using a global scheduling optimization model formulated as a linear program. The global scheduling model is coupled with a generic multi-method simulation model for evaluation purposes. Computational results on industrial data show that the WIP balancing strategy provides better control of the WIP in the system and helps to minimize product mix variability while maintaining high throughput.
We are interested in the influence of spare part service measures on the performance of the front-end wafer fabrication process. This process is characterized by re-entrant flows that exacerbate variability differences. We focus on the bottleneck resource. First, we simulate the spare part supply chain to show the impact of the spare part service measures on the time-to-repair distribution. Second, we use this distribution to assess the performance of the front-end wafer fabrication process. We conclude that the choice of the spare part service measure has a high impact on front-end wafer fabrication performance. Our methodology could help practitioners make improved decisions regarding spare part service measures.
While high levels of automation in modern manufacturing systems increase the reliability of production, tool failure and preventive maintenance (PM) events remain a significant source of production variability. It is well known for production systems, such as the M/G/1 queue, that optimal PM policies possess a threshold structure. Much less is known for networks of queues. Here we consider the prototypical tandem queue consisting of two exponential servers in series subject to health deterioration leading to failure and repair. We model the PM decision problem as a Markov decision process (MDP) with a discounted infinite-horizon cost. We conduct numerical studies to assess the structure of optimal policies. Simulation is used to assess the value of the optimal PM policy relative to a PM policy derived by considering each queue in isolation. Our simulation studies demonstrate that the optimal policy improves mean cycle time and discounted operating costs by about 10%.
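As a reference for the single-server building block mentioned above (a toy machine-health MDP with assumed costs and transition probabilities, not the paper's tandem-queue model), the sketch below runs value iteration on a discounted-cost PM problem; the resulting policy typically exhibits the threshold structure the abstract refers to.

```python
def value_iteration(n_states=6, p_degrade=0.3, run_cost=1.0,
                    pm_cost=5.0, fail_cost=20.0, beta=0.95, tol=1e-8):
    """Value iteration for a toy machine-health MDP with discounted cost.
    States 0..n_states-1 are deterioration levels; the last state is 'failed'."""
    failed = n_states - 1
    V = [0.0] * n_states
    while True:
        V_new, policy = [0.0] * n_states, [None] * n_states
        for s in range(n_states):
            if s == failed:
                # Failure forces a corrective repair back to perfect health
                V_new[s], policy[s] = fail_cost + beta * V[0], "repair"
                continue
            # Action 'run': pay an operating cost that grows with deterioration
            nxt = min(s + 1, failed)
            q_run = run_cost * s + beta * ((1 - p_degrade) * V[s] + p_degrade * V[nxt])
            # Action 'pm': pay the PM cost and return to perfect health
            q_pm = pm_cost + beta * V[0]
            V_new[s], policy[s] = min((q_run, "run"), (q_pm, "pm"))
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new, policy
        V = V_new

if __name__ == "__main__":
    values, policy = value_iteration()
    print("optimal action by health state:", policy)
```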
The adverse impact of new product introductions on the performance of semiconductor wafer fabrication facilities (fabs) is widely acknowledged. In this work, we develop a simulation model of a simplified production system that captures the impact of a new product introduction on total system throughput and the learning effects of production and engineering activities. We use a simulation optimization procedure based on a Genetic Algorithm (GA) to obtain near-optimal solutions for the releases of production and engineering lots that maximize total contribution over the planning horizon. Numerical experiments provide insights into the structure of optimal release policies and illustrate the improvements that can be achieved through the strategic use of engineering lots.
Chair: Stephane Dauzère-Pérès (École Nationale Supérieure des Mines de Saint-Étienne)
An Integration Of Static And Dynamic Capacity Planning For a Ramping Fab
Georg Seidel (Infineon Technologies Austria AG), Ching Foong Lee and Aik Ying Tang (Infineon Technologies (Kulim) Sdn Bhd), Soo Leen Low (D-SIMLAB Technologies Pte Ltd), Chew Wye Chan and Boon Ping Gan (D-SIMLAB Technologies Pte Ltd), Patrick Preuss (D-SIMLAB Technologies GmbH), Cevahir Canbolat (Infineon Technologies Regensburg GmbH), and Prakash Manogaran (Infineon Technologies (Kulim) Sdn Bhd)
In the semiconductor industry, production planning is often complicated due to constantly changing product mixes, re-entrant process flows, and high variation in capacity uptime. In this paper we discuss the combination of static capacity planning and dynamic simulation approaches for production planning, highlighting how these approaches complement each other in our daily business process. Typical static capacity planning is based on a fixed product lead time, allocating the production volume of each product to the available capacity with an objective of capacity minimization and a utilization-limit constraint (plan load limit) to absorb production variability. The dynamic simulation, in contrast, models the process of lots flowing through the production line and consuming capacity at each process step, with additional consideration of the fab WIP at the beginning of the simulation. With simulation we can additionally provide forecasts for important production key figures, for example product cycle times and the fab flow factor.
Many production planning models applied in semiconductor manufacturing represent lead times as fixed exogenous parameters. However, in reality, lead times must be treated as realizations of released lots' cycle times, which are in fact random variables. In this paper, we present a distributionally robust release planning model that allows planned lead time probability estimates to vary over a specified ambiguity set. We evaluate the performance of non-robust and robust approaches using a simulation model of a scaled-down wafer fabrication facility. We examine the effect of increasing uncertainty in the estimated lead time parameters on the objective function value and compare the worst-case performance, average optimality, and feasibility of the two approaches. The numerical results show that the average objective function value of the robust solutions is better than that of the nominal solution by a margin of almost 20% in the scenario with the highest uncertainty level.
Running engineering lots is crucial to stay competitive in the semiconductor market. But production and engineering lots compete for the same expensive equipment. Therefore, considering them in an integrated way is desirable. In this paper, we propose two production planning formulations based on linear programming (LP) for a simplified semiconductor supply chain. The first planning model is based on reduced capacity for production due to engineering lots, while the second model directly incorporates engineering activities. Additional capacity is considered in the latter model due to learning effects that represent process improvements. Both planning models are based on exogenous lead times that are an integer multiple of the planning period length. We show by means of a simulation study for a simplified semiconductor supply chain that the integrated formulation outperforms the conventional one in a rolling horizon setting with respect to profit.
In semiconductor manufacturing, before executing any operation on a product, a machine must be qualified, i.e., certified, to ensure quality and yield requirements. The qualifications of machines in a work-center are essential to the overall performance of the manufacturing facility. However, performing a qualification can be expensive and usually takes time, although the more qualified the machines, the more flexible the production system. Qualification management aims at determining the right qualifications at the lowest cost. We first discuss the limitations of a single period optimization model, in particular due to capacity losses and delays inherent to qualification procedures. Then, we motivate and briefly introduce a multi-period optimization model. Finally, we compare both optimization models in a computational study on industrial instances from a High Mix/Low Volume (HMLV) production facility with a high production variability.
We discuss qualification management problems arising in wafer fabs. Steppers need to be qualified to process lots of different families. A qualification time window is associated with each stepper and family. The time window can be reinitialized as needed and can be extended by the on-time processing of lots from qualified families. Due to the NP-hardness of the qualification management problem, heuristic approaches are required to tackle the large problem instances arising in wafer fabs within a short amount of computing time. We propose fast heuristics for this problem. The binary qualification decisions are made by heuristics, while the real-valued quantities for each family and stepper are determined by linear programming. We conduct computational experiments based on randomly generated problem instances. The results demonstrate that the proposed heuristics are able to compute high-quality solutions in short computing times.
Wafer-to-order Allocation in Semiconductor Back-end Production
Patrick Christoph Deenen and Jelle Adan (Nexperia, Eindhoven University of Technology); Ivo Jean Baptiste François Adan and Alp Akcay (Eindhoven University of Technology); and Joep Stokkermans (Nexperia)
This paper discusses the development of an efficient algorithm that minimizes overproduction in the allocation of wafers to customer orders prior to assembly at a semiconductor production facility. This study is motivated by and tested at Nexperia’s assembly and test facilities, but its potential applications extend to many manufacturers in the semiconductor industry. Inspired by the classic bin covering problem, the wafer allocation problem is formulated as an integer linear program (ILP). A novel heuristic is proposed, referred to as the multi-start swap algorithm, which is compared to current practice and other existing heuristics, and benchmarked against a commercial optimization solver. Experiments with real-world data sets show that the proposed solution method significantly outperforms current practice and other existing heuristics, and that the overall performance is generally close to optimal. Furthermore, some data processing steps and heuristics are presented to make the ILP applicable to real-world applications.
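The abstract does not spell out the formulation; a minimal bin-covering-flavored ILP of the same general shape (with illustrative symbols, not the paper's exact model) is:

$$
\begin{aligned}
\min \quad & \sum_{o} \Big( \sum_{w} d_{w}\, x_{wo} - D_{o} \Big) \\
\text{s.t.} \quad & \sum_{w} d_{w}\, x_{wo} \ge D_{o} \qquad \forall o, \\
& \sum_{o} x_{wo} \le 1 \qquad \forall w, \\
& x_{wo} \in \{0, 1\},
\end{aligned}
$$

where $x_{wo}$ assigns wafer $w$ to order $o$, $d_{w}$ is the number of good dies on wafer $w$, and $D_{o}$ is the die demand of order $o$; the objective penalizes overproduction beyond demand.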
Infineon Technologies Dresden uses discrete event simulation to forecast key performance indicators. The simulation has also been used to perform experiments to improve production planning. It is important to reduce the effort required for the creation and maintenance of the simulation models. For such simulation studies in particular, less detailed models in which some components are omitted can be utilized. We considered simplifying the process flows by substituting operations with constant delays. Different levels of model complexity were investigated. For each level, different tool sets were determined and replaced by delays. First In First Out and Critical Ratio dispatching rules were used. Lot cycle time distributions were utilized in order to compare the simplified models with a detailed model.
Applying Diffusion Models to Semiconductor Supply Chains: Increasing Demand by Funded Projects
Ines Gold (Philipps University Marburg), Hans Ehm and Thomas Ponsignon (Infineon Technologies AG), and Muhammad Tariq Afridi (Technical University of Munich)
The question of the paper is: “How do government-funded projects influence the demand forecast for semiconductor manufacturing?” For semiconductor manufacturing, an accurate demand forecast is of paramount importance due to the challenges arising from a mix of high capital costs and short product life cycles. We consider the Bass Model for new technology and product introduction as the most sophisticated model for supporting demand forecasts for new products and technologies. This paper describes how the boost from a government-funded project can be parameterized in the Bass Model from both a theoretical and a practical point of view. On the practical side, the model is applied to the German funded project GeniusTex and to Productive4.0, and to the increase in manufacturing demand for a pressure sensor. A perspective is given on how this model can be validated in future research once actual data are available.
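For reference, the Bass model expresses per-period adoptions as $n(t) = \big(p + q\,N(t)/m\big)\big(m - N(t)\big)$, with market size $m$, cumulative adoptions $N(t)$, innovation coefficient $p$, and imitation coefficient $q$. The sketch below runs this recursion; representing a funding boost as a shift in $p$ is purely an assumption for illustration, not the calibration used by the authors.

```python
# Minimal discrete-time Bass diffusion sketch (illustrative only; the paper's
# parameterization of the funding boost is not reproduced here).
def bass_adoptions(m, p, q, periods):
    """Return per-period adoptions for market size m, innovation p, imitation q."""
    cumulative, adoptions = 0.0, []
    for _ in range(periods):
        n_t = (p + q * cumulative / m) * (m - cumulative)
        adoptions.append(n_t)
        cumulative += n_t
    return adoptions

baseline = bass_adoptions(m=100_000, p=0.03, q=0.38, periods=20)
# Hypothetical "funded project" boost, modeled here as a higher innovation coefficient p.
boosted = bass_adoptions(m=100_000, p=0.05, q=0.38, periods=20)
```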
With the growing globalization of production systems, the complexity of supply chains as socio-technical systems is escalating, which increases the importance of strong planning systems. Plans are developed to structure production in end-to-end supply chains that can experience nervousness due to uncertainties, which results in unsatisfied customers. Although the external causes of nervousness and instabilities in supply chain planning systems have previously been considered in the literature, internal nervousness in planning these complex networks can result from how the sub-components of the planning system interact. To study internal nervousness, the supply chain system of a semiconductor manufacturer is investigated as a case study. We examine internal nervousness of demand fulfillment by proposing a multi-paradigm simulation approach that combines discrete event and agent-based simulation. The results provide insight into the importance of visibility into the internal interactions of supply chain networks to reduce instability.
The Modeling and Analysis of Semiconductor Manufacturing (MASM) conference and related previous conferences emerged from the Joint European Submicron Silicon Initiative (JESSI)/SEMATECH collaboration project in the late 1990s. The Measurement and Improvement of Manufacturing Capacity (MIMAC) data sets, which also include a simulation model of an Infineon facility, served for decades as reliable simulation reference models with substantial complexity to test academic solutions for research questions coming from academia itself and from the industry. More than a thousand peer-reviewed publications attest to the academic and industry coverage of MASM. Discrete-event and agent-based simulation have moved from the tool and factory level to the entire semiconductor supply chain and sometimes even beyond, covering the domains of value chains containing semiconductors. However, semiconductor manufacturing and semiconductor development have remained at the heart.
With “Horizon 2020”, the European Commission has set up the largest international collaborative research & innovation program. ECSEL, the Initiative on “Electronic Components and Systems for European Leadership”, is the joint strategic European approach building on the JESSI experiences. Projects under the ECSEL umbrella contribute to reaching the next level for semiconductor development, manufacturing, and supply chains. In this talk, some first results of trend-setting projects will be discussed. In the long term, it is very likely that new technologies such as Big Data, the semantic web, and AI with Deep Learning will change the way we develop and manufacture semiconductors. The question is: how and who? MASM has to play a crucial role on the way to answering this question. Clearly, the semiconductor industry and its related B2B World containing semiconductors have the capacity to lead this change, thus unlocking the broader innovation potential towards the B2C World.
ECSEL Digital Reference - A Semantic Web for Semiconductor Manufacturing and Supply Chains Containing Semiconductors
Hans Ehm (Infineon Technologies AG); Nour Hany George Ramzy (Leibniz Hannover University, Infineon Technologies AG); Patrick Moder (Technical University of Munich, Infineon Technologies AG); Christoph Summerer (Technical University of Munich, Infineon Technologies AG); Simone Fetz (Technical University of Munich, Infineon Technologies AG); and Cédric Neau (INSA Lyon, Infineon Technologies AG)
Within the public-private ECSEL Joint Undertaking, the project Productive4.0 aims for improvements with regard to digitalization for electronic components like semiconductors and systems as key enabling technologies. The semiconductor industry has a high growth potential within the frame of digitalization, as its manufactured products are intensively used in production and B2B environments. Thus, supply chains employing and fabricating semiconductors become core drivers and beneficiaries of digitalization. In order to exploit digitalization potentials, Semantic Web technologies are used to create a digital twin for semiconductor supply chains and supply chains containing semiconductors. This paper shows exemplary benefits of this lingua franca, which can be understood by humans and machines alike, for various applications like simulation, deep learning, security, trust, and automation.
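As a minimal sketch of the kind of machine-readable supply-chain statements such a semantic digital twin builds on, the snippet below uses the rdflib Python library with a hypothetical example.org namespace and invented class and property names, not the actual Digital Reference ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace and terms; the real ontology used in the project differs.
SC = Namespace("http://example.org/supplychain#")

g = Graph()
g.bind("sc", SC)

lot = SC["lot_4711"]                        # an invented wafer-lot identifier
g.add((lot, RDF.type, SC.WaferLot))
g.add((lot, SC.producedAt, SC.WaferFab_Dresden))
g.add((lot, SC.plannedLeadTimeDays, Literal(42)))

print(g.serialize(format="turtle"))         # readable by humans and machines alike
```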
As part of an effort to present to the semiconductor manufacturing community an updated wafer fab testbed, we provide the first of two simulation models, namely a High-Volume/Low-Mix (HV/LM) fab simulation model. The model is realistic in scale and level of complexity. A full description of the model features is provided, and its performance is studied based on an implementation using the AutoSched AP simulation tool. The simulation model is made publicly available online to allow researchers as well as practitioners to gain hands-on experience with it, and hopefully validate its features or propose changes if needed. A final version of the testbed, including a low-volume/high-mix wafer fab simulation model, will be presented within a year.
Semiconductor manufacturing is highly complex, investment intensive, and constantly changing due to innovation. The semiconductor product market is highly volatile due to short product life cycles with difficult-to-predict ramps and end-of-life demands. These challenges are mitigated via flexible production capabilities, e.g., dynamic routing or rescheduling, used by planning systems to transfer volatile demands to well-utilized factories. The product structure is one of the keys to enabling the desired result. Product structure representations include linear, tree, and network forms. In this paper, definitions of several product structure representations are given, and hypothesized benefits and drawbacks are discussed. Research questions are posed, current research efforts are introduced, and the hypothesis that a time-dependent network-tree representation would be beneficial is postulated. The problem statement is explained by a real merger case where risks and opportunities based on the choice of product structure representation were relevant and no final solution was determined.
Track Coordinator - Military Applications: Nathaniel Bastian (Joint Artificial Intelligence Center, Department of Defense), Andrew Hall (United States Military Academy, Army Cyber Institute)
Cybersecurity, Military Applications
Simulation Applications in Cyber Analytics
Chair: Alex Kline (U.S. Army Office of the Deputy Chief of Staff G-8)
Solving the Army’s Cyber Workforce Planning Problem using Stochastic Optimization and Discrete-Event Simulation Modeling
Nathaniel Bastian and Christopher Fisher (United States Military Academy), Andrew Hall (U.S. Military Academy), and Brian Lunday (Air Force Institute of Technology)
The U.S. Army Cyber Proponent (Office Chief of Cyber) within the Army's Cyber Center of Excellence is responsible for making many personnel decisions impacting officers in the Army Cyber Branch (ACB). Some of the key leader decisions include how many new cyber officers to hire and/or branch transfer into the ACB each year, and how many cyber officers to promote to the next higher rank (grade) each year. We refer to this decision problem as the Army's Cyber Workforce Planning Problem. We develop and employ a discrete-event simulation model to validate the number of accessions, branch transfers, and promotions (by grade and cyber specialty) prescribed by the optimal solution to a corresponding stochastic goal program that is formulated to meet the demands of the current force structure under conditions of uncertainty in officer retention. In doing so, this research provides effective decision-support to senior cyber leaders and force management technicians.
Multiple state and non-state actors have recently used social media to conduct targeted disinformation operations for political effect. Even in the wake of these attacks, researchers struggle to fully understand these operations and, more importantly, measure their effect. The existing research is complicated by the fact that modeling and measuring a person's beliefs is difficult, and manipulating these beliefs in experimental settings is not morally permissible. Given these constraints, our team developed an Agent-Based Model that allows researchers to explore various disinformation forms of maneuver in a virtual environment. This model mirrors the Twitter social media environment and is grounded in social influence theory. Having built this model, we demonstrate its use in exploring two disinformation forms of maneuver: 1) "backing" key influencers and 2) "bridging" two communities.
Smart Public Safety (SPS) systems have become feasible by integrating heterogeneous computing devices to collaboratively provide public safety services. While the fog/edge computing paradigm promises solutions to the shortcomings of cloud computing, like the extra communication delay and network security issues, it also introduces new challenges. From a system architecture perspective, a service oriented architecture (SOA) based on a monolithic framework has difficulty providing scalable and extensible services in a large-scale distributed Internet of Things (IoT)-based SPS system. Furthermore, traditional management and security solutions rely on a centralized authority, which can become a performance bottleneck or a single point of failure. Inspired by the microservices architecture and blockchain technology, a Lightweight IoT based Smart Public Safety (LISPS) framework is proposed on top of a permissioned blockchain network. By decoupling the monolithic complex system into independent sub-tasks, the LISPS system possesses high flexibility in the design process and online maintenance.
Cyber resilience is typically examined in a stove-piped fashion and not well integrated into systems development, operational planning, and acquisition. Cyber resilience encompasses people, processes, technology, and data, but technology is the typical focus of weapon system cyber assessments. Cyber resiliency requires a mission assessment of system vulnerabilities. However, two unique perspectives exist within the DoD that need to be integrated.
Combatant Commanders are focused on the “so what” for cyber and have concerns that include but are not limited to: Theater network operations, alignment of scheme of maneuver to dynamic key cyber terrain, and a more complex challenge of synchronizing authorities, C2 and warfare integration to disrupt and defend kill chains that impact the scheme of maneuver.
Acquisition focused commands (OSD and services) must concern themselves with Weapon System Resilience (people, processes, and tools) and enhancing cyber vulnerability assessments. To support warfighting requirements, they must gain a deep understanding of mission impacts. When assessing a system of systems architecture (such as GPS), the analysis must be rooted in systems engineering and address:
• Critical nodes that affect mission performance
• Prioritization of risk with a deeper understanding of threats, vulnerabilities, and mission impact
• Rank order prioritization of investments to exploit (or defend) critical nodes
To meet these challenges, traditional qualitative wargames and exercises can be enhanced with a combined approach of systems engineering analysis, threat awareness, and M&S tools to provide a more discrete assessment of offensive cyber impacts and defensively-oriented cyber resilience.
In 1995, Retired Navy Captain Wayne Hughes formulated a salvo model for assessing the military worth of warship capabilities in the missile age. Hughes’ model is deterministic, and therefore provides no information about the distribution of outcomes that result from inherently stochastic salvo exchanges. To address this, Michael Armstrong created a stochastic salvo model by transforming some of Hughes’ fixed inputs into random variables. Using approximations, Armstrong provided closed-form solutions that obtain probabilistic outcomes. This paper investigates Armstrong’s stochastic salvo model using data farming. By using a sophisticated design of experiments to run a simulation at thousands of carefully selected input combinations, responses such as ship losses are formulated as readily interpretable regression and partition tree metamodels of the inputs. The speed at which the simulation runs suggests that analysts should directly use the simulation rather than resorting to approximate closed-form solutions.
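As a rough sketch of the kind of model being farmed here, the snippet below wraps Hughes-style exchange equations in Monte Carlo replication; the fleet sizes, parameters, and jitter distributions are invented for illustration and are not Armstrong's specification.

```python
import random

def salvo_exchange(a, b, alpha, beta, a_def, b_def, a_stay, b_stay):
    """One Hughes-style salvo: returns (ships lost by A, ships lost by B)."""
    loss_b = max(0.0, min(b, (alpha * a - b_def * b) / b_stay))
    loss_a = max(0.0, min(a, (beta * b - a_def * a) / a_stay))
    return loss_a, loss_b

def replicate(n_reps=10_000):
    """Stochastic variant in the spirit of Armstrong: jitter offensive firepower."""
    outcomes = []
    for _ in range(n_reps):
        alpha = max(0.0, random.gauss(4.0, 1.0))   # illustrative distribution only
        beta = max(0.0, random.gauss(4.0, 1.0))
        outcomes.append(salvo_exchange(6, 6, alpha, beta, 2.0, 2.0, 1.5, 1.5))
    return outcomes
```

Data farming would repeat such replication over a designed grid of the input parameters rather than a single fixed setting.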
Data Farming Services: Micro-Services for Facilitating Data Farming in NATO
Daniel Huber (Fraunhofer IAIS), Gary Horne (MCR Global), Jan Hodicky (University of Defence), Daniel Kallfass (Airbus Defence and Space GmbH), and Nico de Reus (TNO - Netherlands Organisation for Applied Scientific Research)
In this paper the concept and architecture of Data Farming Services (DFS) are presented. DFS is developed and implemented in NATO MSG-155 by a multi-national team. MSG-155 and DFS are based on the data farming basics codified in MSG-088 and the actionable decision support capability developed in MSG-124. The developed services are designed as a mesh of microservices as well as an integrated toolset and shall facilitate the use of data farming in NATO.
This paper gives a brief description of the functionality of the services and the use of them in a web application. It also presents several use cases with current military relevance, which were developed to verify and validate the implementation and define requirements. The current prototype of DFS was and will be tested at the 2018 and 2019 NATO CWIX exercises, already proving successful in terms of deployability and usability.
Counter-Threat Finance Intelligence (CTFI) is a discipline within the U.S. intelligence enterprise that illuminates and prosecutes terrorist financiers and their supporting networks. Relying on voluminous, disparate financial data, efficient and accurate record linkage is critical to CTFI, as successful prosecutions and asset seizures hinge on the association of designated, nefarious entities with financial transactions falling under U.S. jurisdiction. The Jaro-Winkler (J-W) algorithm is a well-known, widely-used string comparator that is often employed in these record linkage problems. Despite J-W’s popularity, there is no academic consensus on the threshold score at which strings should be considered likely matches. In practice, J-W thresholds are selected somewhat arbitrarily, or with little justification. This paper uses a simulative approach to identify optimal J-W thresholds based on an entity pair’s string lengths, thereby improving the lead-discovery process for CTFI practitioners.
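A minimal sketch of the threshold-selection idea follows; it assumes a jaro_winkler(a, b) comparator supplied by a string-matching library and labeled matched/unmatched training pairs, and it omits the paper's conditioning on string length.

```python
def best_threshold(matched_pairs, unmatched_pairs, jaro_winkler):
    """Sweep candidate thresholds and return the one maximizing F1 on labeled pairs.
    `jaro_winkler` is assumed to be provided by a string-matching library."""
    grid = [t / 100 for t in range(70, 100)]
    best_t, best_f1 = None, -1.0
    for t in grid:
        tp = sum(jaro_winkler(a, b) >= t for a, b in matched_pairs)
        fp = sum(jaro_winkler(a, b) >= t for a, b in unmatched_pairs)
        fn = len(matched_pairs) - tp
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```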
The Canadian Armed Forces (CAF) uses a variety of tools to model, forecast and investigate workforce behavior; one such tool is a custom Discrete Event Simulation (DES) framework designed to address military Human Resources (HR) and personnel questions called the Force Flow Model (FFM). In this work we propose a methodology to verify workforce simulations and we discuss its use for verification of the FFM. We demonstrate precise agreement with an analytic model when the FFM parameters are matched to exactly solvable analytic scenarios. The methodology is simple to understand and apply, is generally applicable to workforce simulations, and helps to meet the need to verify simulations by providing known analytic results to benchmark against.
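The verification idea can be illustrated with a toy, exactly solvable attrition scenario (not the FFM or a CAF parameterization): simulate exponential leave times and compare the simulated headcount against the analytic expectation $N_0 e^{-\lambda t}$.

```python
import math, random

def simulated_headcount(n0=1000, rate=0.1, years=10, reps=200):
    """Average headcount over time when each member leaves after an Exp(rate) service time."""
    totals = [0] * (years + 1)
    for _ in range(reps):
        leave_times = [random.expovariate(rate) for _ in range(n0)]
        for t in range(years + 1):
            totals[t] += sum(lt > t for lt in leave_times)
    return [x / reps for x in totals]

analytic = [1000 * math.exp(-0.1 * t) for t in range(11)]   # E[N(t)] = n0 * exp(-rate * t)
simulated = simulated_headcount()
# Agreement between the two curves (up to Monte Carlo noise) verifies the simulation logic.
```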
Insights into the Health of Defence Simulated Workforce Systems Using Data Farming and Analytics
Brendan Hill (Deakin University), Damjan Vukcevic (The University of Melbourne), Terrence Caelli (The University of South Australia), and Ana Novak (Defence Science and Technology Group)
This work is motivated by the need for the Australian Defence Force to produce the right number of trained aircrew in the right place at the right time. It is critical to understand the impact of structural/resourcing policies on the ability to maintain squadron capability, both for individual squadrons and jointly across the Force. By combining techniques in experimental design, simulation, data analysis and optimization, we have created an automated system to efficiently identify significant relationships between simulation parameters and squadron capability, as well as propose optimal parameter ranges for each individual squadron. The interplay of competing demands between squadrons was then analysed in the context of the entire Force and the stability of these extrapolations over sampling noise was also evaluated. Finally, we present compact summaries and visualisations of the most important insights about the most significant contributors to the "health" of the whole system.
Increased use of unmanned aerial vehicles (UAV) by the United States Air Force (USAF) has put a strain on flying training units responsible for producing aircrew. An increase in student quotas, coupled with new training requirements stemming from a transition to the MQ-9 airframe, impacts the resources needed to meet the desired level of student throughput. This research uses historical data of a UAV flying training unit to develop a simulation model of daily operations within a training squadron. Current requirements, operations, and instructor manning levels are used to provide a baseline assessment of the relationship between unit manning and aircrew production. Subsequent analysis investigates the effects of course frequency, class size, and quantity of instructors on student throughput. Results from this research recommend novel approaches in course execution to more fully utilize instructor capacity and inform UAV flying training units on appropriate manning levels required to meet USAF needs.
The United States Department of Defense is at a crossroads with respect to simulations. Operational commanders are expecting greater analysis capabilities from the vast amount of simulations supporting their training events and exercises. Current material, personnel, and training solutions are not organized to meet this demand and will require changes. This paper describes the current environment with respect to the training and analysis communities and provides potential areas to leverage existing capabilities in a manner that will provide greater benefit to operational commanders and staffs.
The effects of Diminishing Manufacturing Sources and Material Shortages (DMSMS) can be excessively costly if not addressed in a timely manner. A strategic DMSMS management method that seeks to minimize the overall life-cycle cost of a system, e.g., an aircraft or ship, is presented. The goal is to select an optimal schedule of technology refreshes over a fixed lifetime, using lifetime buys as the mitigation option. A DMSMS-specific cost model is constructed that accounts for the costs of multiple, diverse parts in a system and multiple technology refreshes. This study shows the efficacy of using a ranking and selection method to identify the optimal technology refresh strategy for a complex and stochastic cost function dependent upon varying refresh costs. A visual model is presented which provides the ability to quickly compare other feasible strategies.
Enemy anti-ship cruise missiles (ASCM) are increasing in capability thereby posing a greater threat to United States Navy ships. Core to a ship’s defensive system is a computer-based command and decision element that directs simultaneous operations across a broad set of mission areas. Fortunately, software-only changes to this command element can be evaluated and fielded much more quickly than hardware-based changes; and hence, methods to identify viable software-only changes are needed. This study presents a simulation optimization methodology to identify and evaluate such changes using a notional scenario. First, a raid of ASCM threats against a single ship is simulated and then metaheuristics are used to determine the configuration for a ship’s defensive system that maximizes survival. The simulation scenario, a ship’s defensive system, and three specific optimization cases are presented. Results are provided for each optimization case to show the defensive system configuration that best ensures the ship’s survival.
Often, a system of wide-band satellites is employed for real-time exchange of information and over-the-horizon control, but the communications are prone to denial of service (DoS) attacks, jamming, and delayed redeployment. Hence, a swarm of drones could be deployed in mission-critical operations in times of urgency for a secured and robust distributed-intercommunication which is essential for survivability and successful completion of missions. In this paper, a distributed-agent-based framework for secure and reliable information exchange between a constellation of drones is proposed. The framework comprises a mechanism for path planning simulation and estimation, a flexible network architecture for improved client-server (C/S), peer-to-peer (P2P) connectivity, as well as agents for identity authentications and secure communications. The agents are capable of collating information and images to respectively provide comprehensive messages and views of target sites. The framework has been simulated and verified with results showing promise.
Track Coordinator - Modeling Methodology: Olivier Dalle (Universite Nice-Sophia Antipolis), Richard Fujimoto (Georgia Institute of Technology), Gabriel Wainer (Carleton University)
Modeling Methodology
Methods for Energy Conservation and Sustainability
Chair: Sanja Lazarova-Molnar (University of Southern Denmark)
Simulation of Energy-efficient Demand Response for High Performance Computing Systems
Kishwar Ahmed (University of South Carolina Beaufort) and Jason Liu (Florida International University)
Energy consumption is a critical issue for high-performance computing (HPC) systems. Demand response is a program offered by power service providers to reduce energy consumption. In this paper, we present the simulation of an energy-efficient economic demand-response model based on contract theory. Simulation is developed to examine the effectiveness of the demand-response model in a variety of real-world scenarios. Results from simulation show that the model can achieve energy efficiency through the rewarding mechanism of the contract-based design. In particular, the HPC users can be properly compensated to ensure their willing participation in energy reduction.
High-performance computing facilities used for scientific computing draw enormous energy, some of them consuming many megawatt-hours. Saving the energy consumption of computations on such facilities can dramatically reduce the total cost of their operation and help reduce environmental effects. Here, we focus on a way to reduce energy consumption in many ensembles of simulations. Using the method of simulation cloning to exploit parallelism while also significantly conserving the computational and memory requirements, we perform a detailed empirical study of energy consumed on a large supercomputer consisting of hardware accelerator cards (graphical processing units, GPUs). We build on previous insights from mathematical analysis and implementation of cloned simulations that result in computational and memory savings by several orders-of-magnitude. Using instrumentation to track the power drawn by thousands of accelerator cards, we report significant aggregate energy savings from cloned simulations.
The NECADA infrastructure supports the execution of a simulation model of buildings or urban areas, taking care of environmental directives and international standards in the design process. The aim of this simulation model is to optimize the entire life cycle of the system from the point of view of sustainability (environmental, social, and economic impacts), taking care of comfort and climate change to achieve a Nearly Zero Energy Building. Due to the large number of factors to be considered, the number of scenarios to be simulated is huge; hence the use of optimization, and specifically heuristics, is needed to get an answer in a reasonable time. This project aims to analyze the accuracy of two of the most used metaheuristics in this area. To do so, we base our analysis on an extensive dataset obtained from a brute-force execution, which represents a typical dataset for this kind of problem.
The Last to Enter Service (LES) delay announcement is one of the most commonly used delay announcements in queueing theory because it is quite simple to implement. Recent research has shown that a convex combination of LES and the conditional mean delay is optimal under the mean squared error, and that the optimal value depends on the correlation between LES and the virtual waiting time. To this end, we show using simulation that it is important to be careful when using finite queue sizes, especially in a heavy traffic setting. Using simulation, we demonstrate that the correlation between LES and the virtual waiting time can differ from heavy traffic results and can therefore have a large impact on the optimal announcement. Finally, we use simulation to assess the value of giving future information in computing correlations with virtual waiting times and show that future information is helpful in some settings.
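In simplified form (assuming the LES delay and the virtual wait have approximately equal means, as in a heavy-traffic regime; the paper's exact expressions are more general), the announcement and its MSE-optimal weight are

$$ \hat{W} = \theta\, W_{\mathrm{LES}} + (1-\theta)\,\mathbb{E}[W], \qquad \theta^{*} = \frac{\operatorname{Cov}(W_{\mathrm{LES}}, W)}{\operatorname{Var}(W_{\mathrm{LES}})}, $$

which reduces to the correlation $\rho(W_{\mathrm{LES}}, W)$ when the two quantities also have approximately equal variances; this is why the correlation estimated in finite queues drives the quality of the announcement.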
Using System Dynamics modeling, we propose a novel formulation that treats workers as a decision variable together with other parameters that are subject to known but random demand. Previous applications within the literature focus on estimating manpower to meet production demand for a particular process. We extend that work by including system constraints that more realistically represent the problem under consideration. The proposed model was used to find configurations of workers, shifts, and work stations that achieve a minimized deviation between output and demand while maintaining a near constant workforce. The system under consideration is a manufacturing environment, and the model posits the production of certain product lines that are each composed of a series of disparate operations. The model was tested using real production data, and the results show that System Dynamics modeling is an effective method for estimating the long-run resource requirements for variable demand profiles.
This work presents an analytical and a simulation model of an online reservation system and examines its performance and associated bottlenecks. The modeling and evaluation of reservation systems should be carried out through analytical models whenever such models are available. However, it is known that, in general, such models rely on approximations that do not consider several details of the real world. Because the multiprocessing system under consideration uses database queries (with preemption-resume priority) and has a certain degree of complexity, our approach uses an approximate analytical model that is validated by a discrete-event simulation model (and vice versa). Both models are supported by data obtained from measurements of a real-world system. Once validated, the model can be analyzed for other input data and distributions through simulation.
Internet of Things (IoT) refers to a paradigm in which all objects can send information and collaborate with their computing resources through the Internet. The combination of Fog and Cloud Computing defines a distributed system composed of heterogeneous resources interconnected by different communication technologies. Despite its theoretical capacity, using these computational resources poses a challenge to distributed applications and scheduling policies. In this work, we show the initial steps in developing a tool to support the creation of scheduling policies combining simulation and validation. We show the details to be considered when selecting and configuring the different layers of software. To evaluate the proposal, we use a segmentation method in both platforms and a theoretical model to predict the total compute time. Our results show that both simulation and validation platforms agree in the obtained results which also can be explained in terms of a theoretical model.
In this paper we focus on Internet-based simulation, a form of distributed simulation in which a set of execution units that are physically located around the globe work together to run a simulation model. This setup is very challenging because of the latency/variability of communications. Thus, clever mechanisms must be adopted in the distributed simulation, such as the adaptive partitioning of the simulated model and load balancing strategies among execution units. We simulate a wireless model over a real Internet-based distributed simulation setup, and evaluate the scalability of the simulator with and without the use of adaptive strategies for both communication overhead reduction and load-balancing enhancement. The results confirm the viability of our approach to build Internet-based simulations.
Cyber Physical Systems (CPS) and Internet of Things (IoT) communities are often asked to test devices regarding their effects on underlying infrastructure. Usually, only one or two devices are given to the testers, but hundreds or thousands are needed to really test IoT effects. This makes IoT Test & Evaluation (T&E) prohibitive in terms of cost and management. One possible approach is to develop a digital twin of the IoT device and employ many replicas of the twin in a simulation environment comprising various simulators that mimic the IoT device’s operational environment. Cyber attack experimentation is a critical aspect of IoT T&E, and without such a virtual T&E environment it is almost impossible to study large scale effects. This paper presents a digital twin engineering methodology as applicable to IoT device T&E and cyber experimentation.
The paper presents an approach for integrating Lean management tools, namely value-added analysis and Yamazumi charts, with simulation modeling in order to analyze the behavior of certain types of production systems. This is accomplished through the development of a set of high-level, operational, task-oriented instructions that are based on production engineering means and language. This is in contrast to the more traditional simulation modeling approach that uses lower-level, logic-oriented instructions based on the means and language of computer programming. The paper defines the approach and its implementation, and its application is illustrated through a simple manufacturing cell example.
Classical Parallel Discrete Event Simulation (PDES) increases simulation performance by distributing the computational load of a simulation application across multiple computing platforms. A burden is placed on the developer to partition the simulation. This work focuses instead on identifying available parallelism within the simulation executive, thus removing the burden from the developer. Event scheduling and event execution have natural parallelism with each other; future events may be scheduled with the simulation executive while the currently executing event continues. Execution and scheduling are dealt with as multiple threads, taking advantage of multicore architectures. This paper identifies the available parallelism between event scheduling and execution, highlights points of contention between the two, provides an algorithm to take advantage of the parallelism and presents timing results demonstrating the performance benefits.
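A minimal sketch of the idea of overlapping event scheduling with event execution follows (two Python threads sharing a future-event list); this is a toy illustration of the general mechanism, not the algorithm or the measurements reported in the paper.

```python
import heapq, queue, threading

pending = []                    # future event list, ordered by timestamp
to_schedule = queue.Queue()     # events produced by handlers, awaiting insertion
lock = threading.Lock()

def scheduler():
    """Scheduling thread: inserts newly produced events into the future event list."""
    while True:
        ev = to_schedule.get()
        if ev is None:
            break
        with lock:
            heapq.heappush(pending, ev)
        to_schedule.task_done()

def run(until=100.0):
    threading.Thread(target=scheduler, daemon=True).start()
    to_schedule.put((0.0, "init"))
    now = 0.0
    while now < until:
        to_schedule.join()              # wait until outstanding insertions are applied
        with lock:
            if not pending:
                break
            now, _kind = heapq.heappop(pending)
        # "Event execution" runs outside the lock; here it only schedules a follow-up,
        # which the scheduler thread inserts concurrently with the next execution step.
        to_schedule.put((now + 1.0, "next"))
    to_schedule.put(None)               # stop the scheduling thread

run()
```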
A Digital TV-based Distributed Image Processing Platform for Natural Disasters
Manuel Manriquez (Universidad de Santiago de Chile); Fernando Loor and Veronica Gil-Costa (Universidad Nacional de San Luis, CONICET); and Mauricio Marin (Universidad de Santiago de Chile)
After a natural disaster strikes, people spontaneously respond by self-organizing, providing food and drink to the victims and to emergency response teams. During this process, people also share photos, messages, and videos which can be used to improve the general understanding of the situation and to support decision-making. In this context, we propose to use digital television to create a community of digital volunteers who can help to identify objects inside images that cannot be processed automatically. Digital television can help to reach a larger number of digital volunteers because it can be easily used without installing special applications. We present a distributed platform composed of a server, a network of digital volunteers, and an internet service provider. Our proposed platform aims to reduce the communication between the server and the digital volunteers and to reduce the workload of the server. We simulate our proposed platform with the peersim tool.
Discrete-Event Simulation (DES) is a technique in which the simulation engine plays a history following a chronology of events, with the processing of each event taking place at discrete points of a continuous timeline. The simulator must actively manipulate time variables to reproduce the chronology of events over the positive real numbers, which are usually represented by approximate datatypes such as floating-point numbers. Nevertheless, the approximation made by commonly used datatypes in simulations can affect the timeline, preventing the generation of correct results. To overcome this problem, we present two new versatile datatypes to represent time variables in DES. These new datatypes provide a wider range of numbers, reducing approximation errors, and if an error does occur, the simulation user is notified. To test our datatypes, we perform an empirical evaluation in order to compare their performance.
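The underlying problem can be seen in a few lines of Python; the datatypes proposed in the paper are different, richer constructs, and exact rationals are used here only to make the floating-point drift visible.

```python
from fractions import Fraction

float_clock, exact_clock = 0.0, Fraction(0)
step_float, step_exact = 0.1, Fraction(1, 10)

for _ in range(1_000_000):          # one million events spaced 0.1 time units apart
    float_clock += step_float
    exact_clock += step_exact

print(float_clock)                  # drifts away from 100000 due to binary rounding
print(float(exact_clock))           # exactly 100000.0
```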
Developing a realistic agent-based model of human migration requires particular care. Committing too early to a specific model architecture, design, or language environment can later become costly in terms of the revisions required. To examine specifically the impact of differences in implementation, we have developed two instances of the same model in parallel. One model is realized in the programming language Julia; its underlying execution semantics is that of a discrete stepwise stochastic process. The other is realized in an external domain-specific language, ML3, based on continuous-time Markov chain (CTMC) semantics. By developing models in pairs using different approaches, important properties of the target model can be revealed more effectively. In addition, the realization within a programming language and an external domain-specific modeling language, respectively, helped identify crucial features and trade-offs for the future implementation of the model and the design of the domain-specific modeling language.
Towards Adaptive Abstraction in Agent-based Simulation
Romain Franceschini (University of Corsica, University of Antwerp); Simon Van Mierlo (University of Antwerp); and Hans Vangheluwe (University of Antwerp, McGill University)
Humans often switch between different levels of abstraction when reasoning about salient properties of systems with complex dynamics. In this paper, we study and compare multiple modelling and simulation techniques for switching between abstractions. This improves insight and explainability as well as simulation performance, while still producing identical answers to questions about properties. Traffic flow modelled using an Agent Based Simulation formalism is used to demonstrate the introduced concepts. The technique requires explicit models (1) of the dynamics of both individual cars and of emergent “jams”, (2) of the conditions –often involving complex temporal patterns– under which switching between the levels of abstraction becomes possible/necessary and (3) of the state initialization after a switch. While aggregation is natural when going from detailed to abstract, the opposite direction requires additional state variables in the abstract model.
Doing things “in the cloud” has become ubiquitous and the “cloud” has become a rich platform for the use of modeling and simulation (M&S) anywhere and anytime to provide solutions to complex problems. Creation of an Integrated Development Environment (IDE) for building and executing visual M&S applications “in the cloud” poses significant technical challenges. This paper presents an architecture and a design of a cloud-based visual M&S IDE for the example problem domain “traffic networks”. The IDE consists of integrated software tools that provide computer-aided assistance in the composition and visualization of simulation models under a web browser on a client computer while the simulation model is being executed on a server computer. Based on a client-server architecture enabling distributed multitiered M&S development, the design employs an asynchronous visualization protocol with efficient resource utilization. The architecture and design can assist researchers and developers to create other cloud-based visual M&S IDEs.
Towards a Deadline-based Simulation Experimentation Framework using Micro-services Auto-scaling Approach
Anastasia Anagnostou, Simon J. E. Taylor, and Nura Tijjani Abubakar (Brunel University London); Tamas Kiss, James DesLauriers, Gregoire Gesmier, and Gabor Terstyanszky (University of Westminster); and Peter Kacsuk and Jozsef Kovacs (MTA SZTAKI)
There is a growing number of research efforts in developing auto-scaling algorithms and tools for cloud resources. Traditional performance metrics such as CPU, memory, and bandwidth usage for scaling resources up or down are not sufficient for all applications. For example, modeling and simulation experimentation is usually expected to yield results within a specific timeframe. In order to achieve this, the quality of experiments is often compromised, either by restricting the parameter space to be explored or by limiting the number of replications required to give statistical confidence. In this paper, we present early stages of a deadline-based simulation experimentation framework using a micro-services auto-scaling approach. A case study of an agent-based simulation of population physical activity behavior is used to demonstrate our framework.
Hardware-assisted Incremental Checkpointing in Speculative Parallel Discrete Event Simulation
Stefano Carnà, Serena Ferracci, Emanuele De Santis, and Alessandro Pellegrini (Sapienza, University of Rome) and Francesco Quaglia (University of Rome Tor Vergata, Lockless S.r.l.)
Nowadays, hardware platforms offer a plethora of innovative facilities for profiling the execution of programs. Most of them have been exploited as tools for program characterization, thus being used as a kind of program-external observer. In this article we take the opposite perspective, where hardware profiling facilities are exploited to execute core functional tasks for the correct and efficient execution of speculative Parallel Discrete Event Simulation (PDES) applications. In more detail, we exploit them—specifically, the ones offered by Intel x86-64 processors—to build a hardware-supported incremental checkpointing solution that enables the reduction of the event-execution cost in speculative PDES compared to the software-based counterpart. We integrated our solution in the open source ROOT-Sim runtime environment, thus making it available for exploitation.
Simulation is typically a result of a tremendous amount of work performed by experts in various fields, usually in computer science, mathematics and the corresponding application area. The currently uncomplicated accessibility to data provides a significant opportunity to reduce the requirements for expert knowledge in some aspects, or at least to only utilize expert knowledge to supplement and validate data-derived models, or, vice versa, use collected data to confirm and validate existing expert knowledge. In this paper we explore the idea of derivation of simulation models from data. We, furthermore, survey and summarize related existing efforts for the most popular simulation paradigms, identifying benefits, opportunities and challenges, as well as discuss the ways in which the traditional simulation study processes are impacted.
Simulation studies make use of various types of simulation experiments. For users, specifying these experiments is a demanding task since the specification depends on the experiment type and the idiosyncrasies of the used tools. Thus, we present an experiment generation procedure that guides users through the specification process, and abstracts away from the concrete execution environment. This abstraction is achieved by (1) developing schemas that define the properties of simulation experiments and dependencies between them, (2) specifying a mapping between schema properties and template fragments in the specification language of a target backend. We develop schema, template fragments, and mappings for stochastic discrete-event simulation, and show how the concepts can be transported to a different domain of modeling and simulation. Further, we expand the developed simple experiment schemas by a schema for experiment designs, and generate executable sensitivity analysis experiments, thereby demonstrating versatility and composability of our approach.
Modeling and simulation have been around for years, and their application to study many different systems and processes has proven their practical importance. Various research efforts have sought to optimize simulation performance and capabilities, but few address the issue of generating realistic inputs for simulating into the future. In this paper, some issues in the commonly used simulation flow are identified, and deep learning is introduced to enhance realism by learning historical data progressively, so as to generate realistic inputs to a simulation model. We focus on improving the input generation phase and not the model of the system itself. To the best of our knowledge, this is the first work that realizes the possibility of integrating deep learning models directly into simulation models for general-purpose applications. Experiments showed that the proposed methods are able to achieve higher overall accuracy in generating input sequences compared to the current state of the art.
Track Coordinator - Networks and Communications: Stenio Fernandes (Element AI), Jason Liu (Florida International University)
Networks and Communication
Networking and Communications
Chair: Jason Liu (Florida International University)
Validating Agent-based Models of Large Networked Systems
Madhav Marathe, Abhijin Adiga, Chris Barrett, Stephen Eubank, Chris Kuhlman, Henning Mortveit, S.S. Ravi, Daniel J. Rosenkrantz, Richard E. Stearns, Samarth Swarup, and Anil Vullikanti (University of Virginia)
The paper describes a systematic approach for validating real-world biological, information, social and technical (BIST) networks. BIST systems are usually represented using agent-based models and computer simulations are used to study their dynamical (state-space) properties. Here, we use a formal representation called a graph dynamical system (GDS). We present two types of results. First we describe four real-world validation studies spanning a variety of BIST networks. Various types of validation are considered and unique challenges presented by each domain are discussed. Each system is represented using the GDS formalism. This illustrates the power of the formalism and enables a unified approach for validation. We complement the case studies by presenting new theoretical results on validating BIST systems represented as GDSs. These theoretical results delineate computationally intractable and efficiently solvable versions of validation problems.
The next generation of railway systems is increasingly adopting new and advanced communication technology to boost control efficiency. The challenge of this transition, arising from the mission-critical and time-critical nature of railway control systems, is to meet specific requirements such as continuous availability, (close to) real-time operation, and functional correctness despite operational errors and growing cyber-attacks. In this work, we develop S3FTCN, a simulation platform to evaluate the on-board train communication network (TCN). S3FTCN is composed of a parallel discrete event simulation kernel and a detailed packet-level TCN simulator. S3FTCN demonstrates good scalability to support large-scale network experiments using parallel simulation. We also conduct a case study of a double-tagging VLAN attack to illustrate the usability of S3FTCN.
Software defined networking (SDN) is a technology for the management of computer networks. We are interested in SDNs where the switches are mounted on vehicles, the controller is airborne, and connectivity changes dynamically as a result of vehicular movement. The challenge of interest is that when a controller fails, the switches are left with whatever configuration they had at the point of failure. Without a controller, as the switches move and links become unusable, flows configured to use those links are broken unless the configuration has anticipated such events and provided alternative routes. We describe a high fidelity SDN network simulator which uses snapshots of actual switch configurations and analyzes their resilience to the impacts of controller failure. We propose an algorithm for backup paths which is guaranteed not to cause loops, and conduct a simulation study to understand the types of domains where backup paths have the greatest impact on network performance.
Optimizing the inputs for simulation models has been a topic of interest in the OR community for many years. Similarly, Monte Carlo methods for optimizing even deterministic functions have been subjects of interest in the statistics and computing communities for at least the same length of time. Recent interest in such global optimization problems as arise in deep neural networks and the implications for learning and exploiting complex relationships has only intensified these interests. This talk will discuss frameworks to unify the views of these developments from different perspectives and particularly to address issues that arise in dynamic learning, optimization, and statistical inference.
A new simulator of a Twitter recommender system for suggesting a followee (i.e., a user to follow) is proposed in this paper. This simulator mimics a distributed processing environment, such as cloud computing, for running a recommendation algorithm in parallel, and can be used to test both the scalability and the accuracy of the employed algorithm. Simulation and comparison of three algorithms (similarity, clustering, and neural network) show that the similarity and clustering algorithms have higher accuracy than the machine-learning-based algorithm. To test each algorithm, Twitter users’ tweets were transformed using the TF-IDF and bag-of-words NLP approaches. By running the employed algorithms in parallel and adding processing nodes, the scalability of the recommender system is measured. This paper shows that the recommender system simulator developed in this work is an effective tool for testing the scalability and accuracy of the algorithms employed in Twitter recommendation systems when deployed in cloud computing.
The hoisting process is becoming the most challenging area for efficiency improvement in shipyards because its project-type nature prevents it from employing the existing modeling and simulation methods developed for other types of manufacturing processes. To solve this problem, a new dedicated modeling and simulation method is proposed. It applies four views with a task-centric idea to reflect the particularities of the project-type hoisting process, focusing on a work-flow perspective instead of material flow and allowing integrated simulation of both information activities and working activities by introducing information tasks into the task view. Through this method, the shop manager can not only predict on-site performance as in common simulation scenarios, but can also evaluate communication scenarios, which is very important for project-type manufacturing processes.
This work introduces three case studies of an IoT network that is simulated using a discrete-event model. In the first case, the network operates without resource redundancy. The second case introduces link redundancy with an emergency gateway, and the third case extends the first with macro-cells. The main performance parameters under analysis are the processor utilization and the global network delay. The model and its implementation may be used to plan, configure, and dimension the network in the face of possible catastrophic events where key network resources may be lost.
Does the use of DES differ when implementing novel production processes and technologies in the manufacturing industry? Addressing this question, this paper explores the use of DES during the design of production systems implementing process innovations. Data are drawn from a qualitative case study in the heavy vehicle industry where a process innovation was implemented in the form of a multi-product production system. The study reveals novel findings related to the purpose of, and challenges in, the use of DES models during the design of production systems when implementing process innovations.
We analyze a tree search problem with an underlying Markov decision process, in which the goal is to identify the best action at the root that achieves the highest cumulative reward. We present a new tree policy that optimally allocates a limited computing budget to maximize a lower bound on the probability of correctly selecting the best action at each node. Compared to the widely used Upper Confidence Bound (UCB) type of tree policies, the new tree policy presents a more balanced approach to manage the exploration and exploitation trade-off when the sampling budget is limited. Furthermore, UCB assumes that the support of reward distribution is known, whereas our algorithm relaxes this assumption.
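For reference, the UCB1-style tree policy that the new policy is compared against selects, at a node visited $N$ times,

$$ a_{\mathrm{UCB}} = \arg\max_{i} \left( \bar{X}_i + c \sqrt{\frac{\ln N}{n_i}} \right), $$

where $\bar{X}_i$ is the sample mean cumulative reward of child $i$, $n_i$ its visit count, and $c$ an exploration constant whose tuning typically presumes a known (bounded) reward support; the proposed policy instead allocates the remaining sampling budget to maximize a lower bound on the probability of correctly selecting the best action.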
The objective of this paper is to present, by means of simulation experiments, a study of the reaction of selected performance parameters, such as carried traffic and blocking probability, in a fixed-capacity wireless cell environment under intense multi-service traffic variation, as occurs in mass communication events with unexpected traffic growth.
In disasters, a major challenge for healthcare systems is the evacuation and mobility of patients. Hospital evacuation simulation that considers patients with different mobility characteristics, needs, and interactions demands a microscopic modeling approach such as Agent-Based Modeling (ABM). However, as the system increases in size, the models become highly complex and intractable. Large-scale complex ABMs can be reduced by reformulating the micro-scale model of agents as a meso-scale model of population densities and partial differential equations, or a macro-scale model of population stocks and ordinary differential equations. In this study, crowd dynamics considering people with different physical and mobility characteristics is modeled on three different scales: microscopic (ABM), mesoscopic (fluid dynamics model), and macroscopic (system dynamics model). Similar to the well-known Predator-Prey model, the results of this study show the extent to which macroscopic and mesoscopic models can produce system-level behaviors emerging from agents’ interactions in ABMs.
Rare-event probabilities and risk measures that quantify the likelihood of catastrophic or failure events can be sensitive to the accuracy of the underlying input models, especially regarding their tail behaviors. We investigate how the lack of tail information of the input can affect the output extremal measures, in relation to the level of data that are needed to inform the input tail. Using the basic setting of estimating the probability of the overshoot of an aggregation of i.i.d. input variables, we argue that heavy-tailed problems are much more vulnerable to input uncertainty than light-tailed problems. We explain this phenomenon via their large deviations behaviors, and substantiate with some numerical experiments.
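A toy experiment conveys the contrast (the parameters and threshold are arbitrary and not taken from the paper): with matched means, crude Monte Carlo finds essentially no overshoots for light-tailed summands, while the heavy-tailed case retains a non-negligible probability driven by a single large jump.

```python
import random

def overshoot_prob(sampler, n=10, b=30.0, reps=200_000):
    """Crude Monte Carlo estimate of P(X1 + ... + Xn > b) for i.i.d. draws from sampler()."""
    hits = sum(sum(sampler() for _ in range(n)) > b for _ in range(reps))
    return hits / reps

light = lambda: random.expovariate(1.0)           # Exp(1): mean 1, light-tailed
heavy = lambda: random.paretovariate(2.0) - 1.0   # shifted Pareto(2): mean 1, heavy-tailed

print(overshoot_prob(light), overshoot_prob(heavy))
```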
We analyse the equilibrium behaviour of a large network of banks in presence of incomplete information, where inter-bank borrowing and lending is allowed, and banks suffer shocks to assets. In a two time period graphical model, we show that the equilibrium wealth distribution is the unique fixed point of a complex, high dimensional distribution-valued map. Fortunately, there is a dimension collapse in the limit as the network size increases, where the equilibriated system converges to the unique fixed point involving a simple, one dimensional distribution-valued operator, which, we show, is amenable to simulation. Specifically, we develop a Monte-Carlo algorithm that computes the fixed point of a general distribution-valued map and derive sample complexity guarantees for it. We numerically show that this limiting one-dimensional regime can be used to obtain useful structural insights and approximations for networks with as low as a few hundred banks.
The forecasting of traffic conditions is of great concern to many people around the world every day. The prediction of travel times based on historical data, road congestion, weather, and special events helps not only individuals in their daily driving, but also helps companies and government entities in their planning as well. Many approaches have been used for traffic forecasting, from time series analysis to artificial intelligence techniques to simulation. We propose that combining approaches through machine learning can yield better results than each model can produce on its own.
We consider multiobjective simulation optimization problems with heteroscedastic noise, where we seek to find the non-dominated set of designs evaluated using noisy simulation evaluations. To perform the search of competitive designs, we propose a metamodel-based scalarization method, which explicitly characterizes both the extrinsic uncertainty of the unknown response surface, and the intrinsic uncertainty inherent in stochastic simulation. To differentiate the designs with the true best expected performance, we propose a multiobjective ranking and selection approach that accounts for correlation between the mean values of alternatives. Empirical results show that the proposed methods only require a small fraction of the available computational budget to find the optimal solutions.
Atherosclerotic cardiovascular disease (ASCVD) is among the top causes of death in the US. Although research has shown that ASCVD has genetic elements, the understanding of how genetic testing influences its prevention and treatment has been limited. To this end, we develop a simulation framework to estimate the risk for ASCVD events due to clinical and genetic factors. We model the health trajectory of patients stochastically and determine treatment decisions with and without genetic testing. Since the cholesterol level of patients is one controllable risk factor for ASCVD events, we model cholesterol treatment plans as a Markov decision process. By simulating the health trajectory of patients, we quantify the effect of genetic testing in optimal cholesterol treatment plans. As precision medicine becomes increasingly important, having a simulation framework to evaluate the impact of genetic testing becomes essential.
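As a rough illustration of this kind of treatment-planning formulation (not the authors' model), the sketch below runs value iteration on a tiny hypothetical MDP; the health states, treatment actions, transition probabilities, rewards, and discount factor are all made up for demonstration.

```python
import numpy as np

# Toy MDP: 3 hypothetical risk states, actions 0 = no statin, 1 = statin.
# P[a, s, s'] are made-up transition probabilities; r[a, s] are made-up one-step rewards.
P = np.array([
    [[0.85, 0.10, 0.05], [0.10, 0.75, 0.15], [0.05, 0.20, 0.75]],   # no treatment
    [[0.90, 0.08, 0.02], [0.20, 0.72, 0.08], [0.10, 0.30, 0.60]],   # treatment
])
r = np.array([
    [1.00, 0.60, 0.10],   # quality-of-life style reward without treatment
    [0.95, 0.65, 0.20],   # small treatment burden, better payoff in worse states
])
gamma = 0.97

V = np.zeros(3)
for _ in range(500):                       # value iteration
    Q = r + gamma * P @ V                  # Q[a, s] = r[a, s] + gamma * sum_s' P[a, s, s'] * V[s']
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmax(axis=0)                  # best action per state
print(V, policy)
```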
Despite the documented benefits of ridesourcing services, recent studies show that they can significantly slow down traffic in the densest cities. To implement congestion pricing policies for those vehicles, regulators need to estimate the degree of the congestion effect. This paper studies simulation-based approaches to address the two technical challenges arising from the representation of system dynamics and the optimization of congestion price mechanisms. To estimate the traffic state, we use a metamodel representation for traffic flow and a numerical method for data interpolation. To reduce the burden of replicated evaluations in stochastic optimization, we use a simulation-optimization approach to compute the optimal congestion price. This data-driven approach can potentially be extended to solve large-scale congestion pricing problems with unobservable states.
Simulation studies make use of various types of simulation experiments. Specifying these experiments is a demanding task since the specification depends on the experiment type and the idiosyncrasies of the used tools. Thus, we present an automatic experiment generation procedure that uses experiment schemas to describe simulation experiments in a tool-independent manner. We apply the concepts to two different domains of modeling and simulation (M&S) to illustrate how simulation experiment specifications go beyond specific tools and applications.
Decision-makers are continuously facing the challenge of maintaining logistics networks in a competitive condition by implementing actions, e.g., changing the transport structure within the network. Therefore, logistics assistance systems are increasingly being used to support the decision-making process, e.g., by identifying and proposing promising actions. A typical feature of this type of assistance system is, however, that the possible actions are predefined by the system. Adapting existing actions or implementing new actions is regularly a problem. To solve this problem, the author has developed a new method for the formal description and modelling of user-generated actions and their transformation into changes to simulation models. The method is based on a semantic model for mapping these actions, which are parameterized by a novel domain-specific language.
We consider the problem of estimating the output variance in simulation analysis that is contributed by the statistical errors in fitting the input models, the latter often known as input uncertainty. This variance contribution can be written in terms of the sensitivity estimate of the output and the variance of the input distributions or parameters, via the delta method. We study the direct use of this representation to obtain efficient estimators for the input-contributed variance, using finite differences and random perturbations to approximate the gradient, focusing especially on the nonparametric case. In particular, we analyze a particular type of random perturbation motivated by resampling. We illustrate the optimal simulation allocation and the simulation effort complexity of this scheme, and show some supporting numerical results.
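A minimal sketch of the delta-method representation is given below, assuming the simulation output can be evaluated at perturbed input parameters and that an estimator covariance matrix is available; the example function and covariance are hypothetical, and the simulation noise, resampling-based perturbations, and allocation analysis that are the paper's focus are omitted.

```python
import numpy as np

def input_variance_delta(psi, theta_hat, Sigma, h=1e-3):
    """
    Delta-method estimate of the output variance contributed by input uncertainty:
    grad(psi)(theta_hat)^T  Sigma  grad(psi)(theta_hat),
    with the gradient approximated by central finite differences.
    psi: simulation output as a function of the input parameters (treated as noiseless here for brevity).
    Sigma: covariance matrix of the input-parameter estimator (e.g., from asymptotics or the bootstrap).
    """
    theta_hat = np.asarray(theta_hat, dtype=float)
    d = len(theta_hat)
    grad = np.zeros(d)
    for i in range(d):
        e = np.zeros(d); e[i] = h
        grad[i] = (psi(theta_hat + e) - psi(theta_hat - e)) / (2 * h)
    return grad @ Sigma @ grad

# Hypothetical example: an M/M/1-like output psi(theta) = lambda / (mu - lambda).
psi = lambda th: th[0] / (th[1] - th[0])
Sigma = np.diag([0.01, 0.02])          # hypothetical estimator covariance
print(input_variance_delta(psi, [0.8, 1.0], Sigma))
```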
Hospitals would like to improve efficiency due to an aging population and increasing expenditures. This research is designed to address the dynamic and stochastic appointment scheduling problem with patient unpunctuality and provider lateness. Our model considers a single provider with whom patients seek to schedule appointments dynamically on a first-come, first-served basis. The problem setting captures the uncertainty in the number of patients requesting appointments, service time durations, patient unpunctuality, provider lateness, patient no-shows, and unscheduled emergency walk-ins. The aim is to find the optimal schedule of start times for the patients in a cost-effective manner. We propose a discrete-event framework model to evaluate the sample-path gradient of the total cost. We use sample average approximation and stochastic approximation algorithms based on unbiased gradient estimators to achieve computational efficiency. We also present a stochastic linear formulation. Our numerical experiments suggest that these approximation algorithms converge to a global optimum.
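For intuition, here is a minimal sample average approximation sketch of this kind of problem: a single-provider day is simulated for a fixed set of random service-time scenarios, and the scheduled start times are adjusted by finite-difference gradient steps on the fixed-scenario average. The cost weights, service-time distribution, and step size are hypothetical, and the finite-difference gradient merely stands in for the unbiased sample-path estimators developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def day_cost(starts, service, session_len=480.0, w_wait=1.0, w_idle=0.5, w_over=2.0):
    """Simulate one clinic day: each patient is seen at max(scheduled start, provider-free time)."""
    t = wait = idle = 0.0
    for s, dur in zip(starts, service):
        begin = max(t, s)
        wait += begin - s            # patient waiting time
        idle += begin - t            # provider idle time
        t = begin + dur
    return w_wait * wait + w_idle * idle + w_over * max(0.0, t - session_len)

# Fixing the scenarios turns the stochastic problem into a deterministic SAA problem.
n_patients, n_scenarios = 10, 300
scenarios = rng.gamma(shape=4.0, scale=7.5, size=(n_scenarios, n_patients))   # ~30-minute mean services

def saa_cost(starts):
    return np.mean([day_cost(starts, s) for s in scenarios])

# Crude finite-difference descent on the SAA objective.
starts = np.arange(n_patients) * 30.0
for _ in range(60):
    grad = np.array([(saa_cost(starts + (np.arange(n_patients) == i) * 1.0)
                      - saa_cost(starts - (np.arange(n_patients) == i) * 1.0)) / 2.0
                     for i in range(n_patients)])
    starts = np.maximum.accumulate(np.clip(starts - 0.5 * grad, 0.0, None))   # keep starts ordered, nonnegative
print(np.round(starts, 1), round(saa_cost(starts), 1))
```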
Smart City initiatives are ongoing around the world to improve the quality of life by leveraging technological advances. In a Smart City equipped with connected infrastructure, traffic data, such as vehicle detections and intersection signal indications, are expected to be received in (near) real time. The objective of the real-time simulation platform highlighted in this effort is the creation of a dynamic data-driven simulation that leverages high-frequency connected data streams to derive meaningful insights about the current traffic state and real-time corridor environmental measures. A data-driven simulation model, driven in near real time by high-frequency connected-infrastructure traffic volume and signal controller data streams, is developed. The model visualizes key traffic and environmental performance measures in near real time, providing dynamic feedback. This paper provides an overview of the architecture and uses sensitivity analysis to explore the impact of volume data imputations on the performance measures produced by the real-time data-driven simulation.
Residents are one of the key resources in healthcare settings, as their participation affects patient care activities. They are not only students in training, but also providers of medical services. The competing missions in a residency program require assignment policies that align with the clinic's performance goals. This paper evaluates resident assignment by determining the ideal number of residents per physician for different patient volumes using discrete event simulation. Results show that while resident assignment can be made based on patient volume, it also depends on the goals of the clinic as well as provider utilization in the clinic.
In human processes, like the Integrated Disability Evaluation System (IDES), the variability in execution of thousands of interpersonal encounters will limit the systemic predictive capability of traditional modeling methods such as regression. To combat this limitation, we developed an agent-based model that replicates every step in the IDES process and simulates the associated human actions. In effect, our model simulates a digital twin of every human involved in the process. Analysis of model outputs shows that performance metrics of individual agents in the simulation are similar to their real-world counterparts, and that aggregate system performance is highly accurate. The success of this simulation model allows for increased confidence in the predictive accuracy of what-if analysis conducted on human processes, where process changes may be modeled to inform policy recommendations.
Towards An Agent-based Model for Managing Recreational Fisheries of Western Baltic Cod
Kevin Haase (Thünen Institute); Oliver Reinhardt and Adelinde M. Uhrmacher (University of Rostock); and Marc Simon Weltersbach, Wolf-Christian Lewin, and Harry V. Strehlow (Thünen Institute)
The removal of fish biomass by marine recreational fishery (MRF) can be substantial for some fish species. This makes its inclusion in stock management necessary. The impact of policy changes on angler behavior is, however, difficult to predict. Agent-based modeling and simulation appears to be a promising approach, as it allows the inclusion of different types of agents with their individual decision strategies and social networks. Biomass removal by MRF was recently considered in the stock assessment for western Baltic cod, but without consideration of angler behavior. Rich unexploited data sources are available, as data about anglers and catches have been collected for years, including surveys about anglers' decision making. Based on these data, we will develop an agent-based model of recreational fisheries for western Baltic cod. We will apply methods for managing the provenance of simulation models to make the foundations of the model, including assumptions and data, explicit.
Agent-based modelling (ABM) can show how interventions influence a society. Social contagion, a subfield of ABM, focuses on the spread of opinions throughout a social network, in which the existence of ties dictates how opinions spread. Complexity can be added to social contagion models by utilizing attributes inherent to the agents, such as a tolerance for change or exposure to counter-information. Social Judgement Theory is a persuasion theory that utilizes ‘latitudes’ to determine how an agent will react to another’s opinion. An agent’s opinion will either not change, move toward, or move away from the other agent’s opinion, depending on how similar their opinions are. Here we explore how the latitudes of Social Judgement Theory can be utilized in a social contagion model to observe the change of opinions in a social network.
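A minimal sketch of such a latitude-based update rule is shown below; the latitude widths, step size, and opinion scale are illustrative assumptions rather than the parameterization used in the study.

```python
import numpy as np

def sjt_update(own, other, accept=0.2, reject=0.6, step=0.1):
    """
    One Social Judgement Theory style opinion update (illustrative parameterization).
    Opinions live in [0, 1]. If the other opinion falls within the latitude of acceptance,
    assimilate toward it; within the latitude of rejection, contrast away from it;
    in the non-commitment band in between, leave the opinion unchanged.
    """
    gap = other - own
    if abs(gap) <= accept:           # latitude of acceptance: move toward
        own += step * gap
    elif abs(gap) >= reject:         # latitude of rejection: move away
        own -= step * gap
    return float(np.clip(own, 0.0, 1.0))

print(sjt_update(0.50, 0.60))  # assimilation toward a nearby opinion
print(sjt_update(0.50, 0.75))  # non-commitment band: no change
print(sjt_update(0.10, 0.90))  # contrast: moves further away
```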
Agent-based models often have large parameter spaces that cannot be fully explored. How can we determine the roles parameters play in defining the resulting simulation behavior? We propose the use of covering arrays, often used in test case generation for software testing, to strategically decrease the search area in simulation parameter space exploration. We test our approach on the Wolf-Sheep Predation model and find that the number of parameter combinations that need to be explored may still be high for some models. We propose next steps for the use of covering arrays in exploring an agent-based model's parameter space.
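As a rough illustration of the idea (not the construction used in the paper), the following greedy sketch builds a strength-2 (pairwise) covering design over a small hypothetical parameter grid; it scores candidates drawn from the full factorial, which is only feasible for small spaces.

```python
from itertools import combinations, product

def greedy_pairwise_design(levels):
    """
    Greedy strength-2 covering design: every pair of values for every pair of parameters
    appears in at least one selected run.
    """
    k = len(levels)
    uncovered = {(i, j, vi, vj)
                 for i, j in combinations(range(k), 2)
                 for vi in levels[i] for vj in levels[j]}
    candidates = list(product(*levels))      # full factorial candidate pool (small spaces only)
    rows = []
    while uncovered:
        row = max(candidates,
                  key=lambda c: sum((i, j, c[i], c[j]) in uncovered
                                    for i, j in combinations(range(k), 2)))
        rows.append(row)
        uncovered -= {(i, j, row[i], row[j]) for i, j in combinations(range(k), 2)}
    return rows

# Hypothetical Wolf-Sheep style parameters: grass regrowth, wolf gain, sheep gain, reproduction rate.
design = greedy_pairwise_design([[10, 20, 30], [10, 20], [2, 4], [0.04, 0.08]])
print(len(design), "runs instead of", 3 * 2 * 2 * 2)
```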
An agent-based simulation model is proposed to explore an unknown tree with arbitrary edge distances using a set of robots that are initially located at the root of the tree and are expected to return to the root after all nodes are explored. The proposed algorithm depends only on local information stored at nodes using a bookkeeping token. We show that the proposed algorithm with the Earliest Selection Policy is superior to the Random Selection Policy. The agent-based simulation can be used to study general cases of network and tree exploration problems with multiple robots.
With the rise of IoT devices, there is an increasing demand for embedded control systems. Discrete-Event Modeling of Embedded Systems (DEMES) is a Discrete Event System Specification (DEVS) based model-driven development methodology that aims to increase reliability and improve time to market by simplifying the development and testing of embedded systems. Problems with the existing toolchain used to implement this design methodology have been addressed and improved upon in Real-Time (RT) Cadmium, an RT-DEVS kernel developed on top of the Cadmium (DEVS) simulator. RT-Cadmium is a complete DEVS simulator and RT kernel, allowing users to simulate and deploy their projects with ease. The tool already supports MBed Enabled ARM microprocessors; however, the process for porting it to new target platforms is simple and documented. RT-Cadmium can also handle asynchronous events, which do not exist in standard DEVS simulators.
Many DEVS extensions have been suggested for parallel simulation execution, but their modeling semantics often become much more complex, making model development difficult. This paper proposes a new simulation method for the parallel execution of classic DEVS models. Specifically, the proposed method identifies events to be processed concurrently using a graph representing all event paths, and event-oriented scheduling is applied to manage the parallel simulation execution.
The Discrete Event System Specification (DEVS) is a widely used formalism for precisely defining discrete event systems and simulating them. As more complex systems are developed, the performance of DEVS models is becoming a crucial subject. To execute a DEVS model, protocols were designed for the classical DEVS model and for the Parallel DEVS model (PDEVS), but no work has formalized the cost of executing a model with these protocols. In this work we propose a definition of this cost and a new metric to analyze model executions.
Complex considerations to plan, design, operate, control and monitor wood supply chains challenged by increasingly frequent natural disasters such as windstorms and forest fires intensify the need for knowledge transfer between science, industry and education. Discrete event simulation provides a powerful method for decision support, but simulation models are rarely used in university education and industrial training mainly because they are complicated and customized for scientific use only. In addition, within science, documentation of highly specific simulation models provides mainly a rough overview, failing to facilitate external expert evaluation and valuable feedback. Consequently, a scientific discrete event simulation model extending from forest to industry was further developed with special focus on animation, visualization and intuitive usability in a workshop setting. Tested in several workshops, it proved to facilitate decision support for managers and to provide means to train students and sensitize researchers on how to deal with challenging supply situations.
Due to insufficient resources, arbitrary decision making, and other challenges in implementing effective screening processes, the burden of global cervical cancer has fallen disproportionately on low- and middle-income countries. Despite many countries adopting modern diagnostic procedures, the implementation of these programs lags far behind the policy changes and risks failure during their early stages. To mitigate these risks in an ongoing implementation of new screening processes in the Iquitos region of Peru, we propose using a discrete event simulation (DES) model to model the initial roll-out of a proven screening process. The DES model will yield insights into potential resource utilization and appropriate staff hours for various performance (coverage) scenarios, and support stakeholders in making appropriate decisions as they resource the implementation of this new screening policy.
In machine learning problems with a large feature space, non-informative or redundant features can significantly harm the performance of prediction models, make model training costly, and weaken model interpretability. Traditional feature selection methods, often performed through greedy search, are susceptible to suboptimal solutions, selection bias, and high variability due to noise in the data. Our Sample Average Approximation framework looks for the best subset of features by utilizing bootstraps of the training set, where the random holdout errors are viewed as simulation outputs. We implement the proposed Simulation Optimization with Genetic Algorithms, noting that this framework is generalizable to any other solver on the integer space. Experiments examine the effect of fixed versus variable and adaptive replication sizes on estimating the performance of each subset. We provide promising results of higher accuracy and more robustness in solution size and content at the cost of longer computation.
Cities require goods and related logistics services, which have economic, political, environmental, and social implications. This research develops a simulation-optimization model for the implementation of automated parcel lockers (APLs) as one solution for urban logistics (UL) operations. First, we use a system dynamics simulation model (SDSM) to evaluate and predict system behaviour at the macro planning level. Second, a facility location problem (FLP) is used at the micro planning level to decide how many APLs to open or close and where to locate them to minimize the total cost. We capture the dynamic behaviour of the APL variables by using the SDSM outputs as inputs to the FLP and vice versa. The aim of the model is to prevent implementation failure of APLs as a sustainable solution for cities.
Stochastic Mixed-Integer Nonlinear Programming Model for Drone-Based Healthcare Delivery
Janiele Custodio (The George Washington University, School of Engineering and Applied Sciences) and Miguel Lejeune (The George Washington University, School of Business)
This paper proposes a mixed integer set covering model with congestion and chance constraints to design a network of drones to deliver defibrillators in response to out-of-hospital cardiac arrest (OHCA) occurrences. The incident rate of OHCAs is assumed to be random, and congestion in the system is modeled as a network of M/G/1 queues. Both the arrival and service rates are determined endogenously in the model, which results in a stochastic nonconvex mixed integer nonlinear location-allocation problem with decision-dependent and exogenous uncertainty. We derive a Boolean-based (Lejeune and Margot 2016) deterministic reformulation and use a simulation-optimization solution approach to solve the stochastic model.
This work uses the concept of “component importance” from the field of reliability for measurement and simulation. The simulation of systems of components or equipment allows the analysis and optimization of maintenance work. The aim of this work is to obtain the best inspection intervals and to optimize the cost of maintenance. This approach intends to contribute to changing the classical reliability analysis performed in industrial companies. New reliability models with simulation can be adopted; in this work, linear consecutive-k-out-of-n models are used. Finally, the modeling and simulation are applied in the chemical industry, more particularly to oil centrifugal pumps.
Hybrid Simulation-based Optimization for the Second-generation Biofuel Supply Chain Network Design
Sojung Kim (Texas A&M University-Commerce), Su Min Kim (Oak Ridge Institute for Science and Education), Evi Ofekeze (Texas A&M University-Commerce), and James R. Kiniry (USDA-Agricultural Research Service)
The goal of this study is to contribute to the commercialization of second-generation cellulosic biofuels (SGCBs) by reducing their operational cost. A hybrid simulation-based optimization approach is devised to design a cost-effective SGCB supply chain. The proposed approach adopts a two-stage procedure consisting of feedstock yield estimation and location-allocation of feedstock storages between farms and refineries. The Agricultural Land Management Alternative with Numerical Assessment Criteria (ALMANAC) model has been adopted for precise estimation of the potential yield of feedstock such as ‘Alamo’ switchgrass with regard to environmental dynamics (e.g., growth competition and weather). In addition, agent-based simulation (ABS) implemented in AnyLogic® is utilized to estimate the operational cost of a SGCB supply chain. Simulation-based optimization with adaptive replication (AR) is devised to find an appropriate SGCB network design in terms of operational cost without causing heavy computational demand. The approach is applied to a SGCB network design problem in the Southern Great Plains of the U.S.
Fashion is one of the world’s most important industries, driving a significant part of the global economy and representing, if it were a country, the seventh-largest GDP in the world in terms of market size. Focusing on the footwear industry, assembly line balancing and sequencing represents one of the more significant challenges fashion companies have to face. This paper presents the results of a simulation-optimization framework implemented in this industry, highlighting the benefits of using simulation together with a finite-capacity scheduling optimization model. The developed simulation-optimization framework includes a scenario analysis that compares production KPIs (average advance, delay, and resource saturation) across scenarios that include or exclude one or more types of stochastic events (i.e., rush orders and/or delays in the expected delivery dates of critical components).
This study proposes a new global optimization algorithm (TRIM) for expensive black-box functions subject to evaluation noise. We use radial basis functions (RBFs) as the surrogate and extend the Stochastic Response Surface (SRS) method to functions with noisy evaluations. To balance the trade-off between exploration, exploitation, and the accuracy of evaluated points, we sequentially select the evaluation points based on three specific metric functions. Case studies show that the proposed algorithm is more effective in finding the optimal solution than alternative methods.
We consider optimization of composite objective functions, i.e., of the form $f(x)=g(h(x))$, where $h$ is a black-box derivative-free expensive-to-evaluate function with vector-valued outputs, and $g$ is a cheap-to-evaluate real-valued function. While these problems can be solved with standard Bayesian optimization, we propose a novel approach that exploits the composite structure of the objective function to substantially improve sampling efficiency. Our approach models $h$ using a multi-output Gaussian process and chooses where to sample using the expected improvement evaluated on the implied non-Gaussian posterior on $f$, which we call expected improvement for composite functions (EI-CF). Although EI-CF cannot be computed in closed form, we provide a novel stochastic gradient estimator that allows its efficient maximization. We also show that our approach is asymptotically consistent, generalizing previous convergence results for classical expected improvement. Numerical experiments show that our approach dramatically outperforms standard Bayesian optimization benchmarks.
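A minimal Monte Carlo sketch of the EI-CF acquisition value at a single point is shown below, assuming the multi-output GP posterior at that point is summarized by a mean vector and covariance matrix; the posterior values and the outer function $g$ are hypothetical, and the stochastic-gradient maximization described in the paper is not shown.

```python
import numpy as np

def ei_cf(mu, cov, g, f_best, n_samples=2000, seed=0):
    """
    Monte Carlo estimate of expected improvement for a composite objective f(x) = g(h(x))
    (minimization). mu and cov summarize the posterior of the vector-valued h(x) under a
    multi-output Gaussian process; g is cheap to evaluate; f_best is the best value seen so far.
    """
    rng = np.random.default_rng(seed)
    y = rng.multivariate_normal(mu, cov, size=n_samples)     # samples of h(x) | data
    f = np.array([g(yi) for yi in y])                         # implied samples of f(x) | data
    return float(np.mean(np.maximum(f_best - f, 0.0)))

# Example with a hypothetical 2-output posterior and g(h) = h_1^2 + h_2^2.
print(ei_cf(mu=np.array([0.3, -0.2]),
            cov=np.array([[0.05, 0.01], [0.01, 0.04]]),
            g=lambda h: h[0] ** 2 + h[1] ** 2,
            f_best=0.2))
```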
This paper incorporates a robust resampling method called the uncertainty evaluation (UE) procedure into particle swarm optimization (PSO) to improve its performance in a noisy environment. The UE procedure allows PSO to find the global best correctly by allocating a limited number of samplings to each particle effectively. In addition, compared with other resampling methods, the high robustness to noise of the UE procedure enhances the performance of PSO in complex problems involving large stochastic noise and many local optima, which is demonstrated in the results of comparative experiments on some benchmarks.
Despite recent advances in rule-based workload control (WLC) mechanisms, the recent semiconductor literature has neglected them, although it has been shown that they outperform most other periodic and continuous order release models. Therefore, this paper compares the most widely used and best-performing WLC model, LUMS-COR, with its equivalent from the semiconductor literature, the Starvation Avoidance (SA) method, in combination with different scheduling rules. We compare the performance of these approaches using a simulation model of a scaled-down wafer fabrication facility. The results show that LUMS-COR outperforms the SA model with regard to total costs and timing measures. In comparison, the LUMS-COR model releases orders later, which reduces the holding costs of finished goods inventory (FGI) and yields only slightly higher average shop floor time and lateness measures.
The manufacturing processes of a semiconductor foundry (FAB) are grouped into two categories, the front-end processes and the back-end processes. Due to the development of semiconductor technology, the back-end processes have become more complex in response to new customer requirements. In this presentation, a simulation-based scheduler is introduced that is highly customized to a Korean back-end semiconductor foundry. Different heuristic dispatching rules considering various practical constraints are applied to different processes, and preliminary experiments indicate that the system is efficient in the pilot line.
In the semiconductor manufacturing industry, cluster tools are widely used for most wafer fabrication processes such as photolithography, etching, deposition, and even inspection. To improve the performance of semiconductors, the wafer circuit width has shrunk dramatically. Since this makes the scheduling problems highly complex, a sophisticated simulation model is needed to test and verify the various scheduling methods. Most previous research has focused on the Vacuum Module (VM). However, the scheduling problem of the Equipment Front-End Module (EFEM) has recently been emphasized, and bottlenecks have begun to occur in the EFEM as well. Therefore, in this study, we propose a new modeling method that includes the EFEM, which was not considered in the past.
A Simheuristic Approach to Solve Tactical Airport Ground Service Equipment Planning
Yagmur S. Gök and Maurizio Tomasella (University of Edinburgh), Daniel Guimarans (Monash University), and Cemalettin Ozturk (United Technologies Research Center)
In this study we work on a tactical-level airport decision problem: the daily allocation of Ground Service Equipment based on the flight schedule as well as potential foreseeable deviations from the original operations plan. We integrate simulation within an overall Simulation-Optimization framework, falling within the family of so-called Simheuristics, to deal with the uncertain factors of the problem, such as flight delays or resource availability. We contribute to the literature by proposing a feedback mechanism from simulation back to optimization. The tactical-level problem is essentially a discrete combinatorial problem, where metaheuristics are used to attack the problem in a timely manner and reach reliable solutions, the robustness of which is evaluated thanks to the inclusion of simulation at specific steps of the overall methodology.
The day-ahead electricity market (DAM) is the central planning mechanism for dispatching electric power. In a perfectly competitive market, generating firms’ bids equal their marginal cost of operation. The recently restructured electricity market, however, is an oligopolistic market at best, where a few entrenched utilities can exercise market power to manipulate the DAM price. Traditionally, such a market is modeled as reaching an optimal, Nash equilibrium price for electricity. We simulate market players’ bidding strategies via reinforcement learning. We show that a Q-learning algorithm accurately models the Nash equilibrium, no matter the number of Nash equilibria. However, it takes players over one year of experimenting with bidding strategies to achieve these optimal outcomes. Future work is focused on replicating this result with real market data from the New York Independent System Operator (NYISO), in order to assess market power and the existence of Nash equilibria in the real world.
Traditionally, simulation has used data about the real system for input data analysis or within data-driven model generation. Automatically extracting behavioral descriptions from the data and representing them in a simulation model is a weak point of these approaches. Machine learning, on the other hand, has proven successful in extracting knowledge from large data sets and transforming it into more useful representations. Combining simulation approaches with methods from machine learning therefore seems promising. By representing some aspects of a real system by a traditional simulation model and others by a model generated through machine learning, a hybrid system model is created. This extended abstract suggests a specific hybrid system model that incorporates a deep learning method for predicting high-resolution time series of the power usage of machining jobs based on the control code the machine is operated with.
As cities become more diverse and complex, methods for developing urban policies need to become more systematic and sophisticated. In this sense, data-oriented policy making is one feasible approach. This paper proposes a simulation-based decision-making tool for urban policies. Using the proposed framework, users are expected to evaluate their policy ideas and discover an optimized policy for their objectives before implementing real policies.
As network-centric operating environments require the interworking of manned and unmanned weapon systems, there is an increasing need to analyze their capabilities and effectiveness in combat simulation. In this paper, we suggest a cooperative engagement behavior model of manned-unmanned systems for engagement-level combat simulation. Using the proposed model, we demonstrate reconnaissance behaviors in various areas under a small-unit combat scenario and show the analytical results of combat effectiveness.
This poster shows the development and the application of a LEGO® Manufacturing System (LMS) at the Department of Mechanical Engineering of Politecnico di Milano (Milan, Italy). The LMS exploits LEGO® MINDSTORMS® components to model a production line. The miniaturized system is an innovative platform both for research and didactic purposes thanks to high flexibility and easy reconfiguration.
Healthcare-associated infections (HAIs), particularly multi-drug resistant organisms, are a major source of morbidity and mortality for patients. We developed a multiscale computational model of transmission to explore the effectiveness of interventions to reduce HAIs. We utilized a state-level patient mix database from Maryland to develop a meta-population model that simulates the movement network of patients between acute hospitals, long-term care facilities, and communities on a regional scale. On a local scale, we modeled the population at each facility and community using an epidemiological compartmental model that included susceptible, colonized, and infectious patients and their transitions between each compartment state. The local models are embedded in a multiscale model in order to simulate transmission patterns of multi-drug resistant organisms and the incidences of healthcare-associated infections for each distinct population. We then assessed the effectiveness of policy interventions, such as using an electronic patient registry or increasing active surveillance.
Wildfire simulation requires many geographical and georeferenced datasets. Integrating all of the processes for simulation into a GIS seems natural, but most available solutions are closed source, do not use standards, and cannot be adapted or modified to cover new requirements, such as the integration of fire suppression activities into a fire spread simulator. GisFire is an open-source tool under development, integrated into QGIS, that uses the standard OGC formats and GIS APIs and provides a set of APIs and procedures to interact with a fire spread simulator. The GisFire APIs also provide tools to evaluate different fire spread simulators, different fire suppression resource management policies and strategies, and personal or economic risks. This poster presents the design of the fire spread simulation tool integrated into QGIS to focus research on wildfire simulation and analysis.
For decades, Modeling & Simulation (M&S) has been the method of choice in Operations Research and Management Science (OR/MS) for analyzing system behaviors. The evolution of M&S brought Distributed Simulation (DS) via the High-Level Architecture (HLA), used mainly in defense applications, allowing researchers to compose models that run on different processors. As cloud computing grows, its capabilities have upgraded many applications, including M&S, providing the elasticity needed for DS to speed up simulation by bringing reusable models together with seamless interoperability. This paper presents a framework for composing and executing Hybrid Distributed Simulation in the cloud.
The goal of this research is to develop a digital twin model-based simulation analysis model for productivity and failure analysis of a micro testbed. Targeting the micro testbed, a miniaturized manufacturing line for mass customization, a cyber-physical system-based digital twin model is constructed. Using the digital twin model, simulation models are developed for productivity analysis, covering throughput, bottleneck, and machine availability analysis, and for failure analysis, covering machine tool wear and machine failure analysis. A dashboard that visualizes the analysis results is also developed.
Spreadsheet software is commonly available and widely used. There are numerous uses and methods available within spreadsheet software, but its capabilities are limited. Due to these limitations, spreadsheet software was compared with a simple dynamic simulation model. Both methods were validated against values from the literature, and the results showed the same trends, with some differences in the values. The simulation model enables future developments such as including locations and real transport distances and using realistic harvesting times. The wider usability of simulation models offers more possibilities for future development, allowing more thorough research of the subject.
Neural Input Modeling (NIM) is a novel generative-neural-network framework that exploits modern data-rich environments to automatically capture complex simulation input distributions and then generate samples from them. Experiments show that our prototype architecture NIM-VL, which uses a variational autoencoder with LSTM components, can accurately and automatically capture complex stochastic processes. Outputs from queuing simulations based on known and learned inputs, respectively, are also statistically close.
In this paper, we propose an approach that combines jackknifing and randomized quasi-Monte Carlo in infinitesimal perturbation analysis (IPA) to estimate quantile sensitivities, which improves the accuracy and precision of the classical IPA estimator. Theoretical properties of the new estimators are provided, and numerical examples are presented to illustrate the effectiveness of the new estimators.
Physicians are critical resources for hospitals. The scheduling of physicians has a great impact on hospital efficiency and the duration of patient treatment. In this research, we consider the physician scheduling problem in the pretreatment phase for cancer patients. The goal is to generate a weekly cyclic physician schedule that shortens the pretreatment duration of patients. Due to the high uncertainties associated with patient arrival days, profiles, and types of cancer, the problem is a stochastic simulation optimisation problem. We propose a stochastic tabu search approach. The tabu search is coupled with several computing budget allocation techniques to use the simulation budget efficiently and mitigate the effect of noise. Experimental results show that the proposed approach is able to obtain high-quality solutions with large savings in simulation budget.
Earth moving operations are a critical component of construction and mining industries with a lot of potential for optimization and improved productivity. In this paper we combine discrete event simulation with reinforcement learning (RL) and neural networks to optimize these operations that tend to be cyclical and equipment-intensive. One advantage of RL is that it can learn near-optimal policies from the simulators with little human guidance. We compare three different RL methods including Q-learning, Actor-Critic, and Trust Region Policy Optimization and show that they all converge to significantly better policies than human-designed heuristics. We conclude that RL is a promising approach to automate and optimize earth moving and other similar expensive operations in construction, mining, and manufacturing industries.
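For readers unfamiliar with the RL side, the sketch below shows the standard tabular Q-learning update on a toy cyclic environment standing in for a loading/hauling cycle; the states, transitions, rewards, and hyperparameters are made up and unrelated to the authors' simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a cyclical, equipment-intensive environment: 5 states, 2 actions, made-up dynamics.
n_states, n_actions = 5, 2
def step(s, a):
    s_next = (s + 1 + a) % n_states                 # hypothetical transition
    reward = 1.0 if s_next == 0 else -0.1           # completing a cycle earns a reward
    return s_next, reward

Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
s = 0
for _ in range(20_000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())   # epsilon-greedy
    s_next, r = step(s, a)
    # Q-learning update toward the bootstrapped target.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
print(np.round(Q, 2), Q.argmax(axis=1))
```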
A Generic Simulation Model for Selecting Fleet Size in Snow Plowing Operations
Yipeng Li and Shuoyan Xu (University of Alberta); Zhen Lei (University of New Brunswick); and Lingzi Wu, Simaan AbouRizk, and Tae Kwon (University of Alberta)
Accumulated snow on roads poses a threat to traffic systems and raises significant safety concerns. Snow plowing is often used to recover roads in the event of heavy snow. Due to the unpredictability of weather conditions, it is difficult to determine the overall performance of a given truck fleet size, making it challenging to estimate the number of snow plow trucks needed for a given highway area. The objective of this research is to estimate truck fleet performance under uncertain weather conditions and to provide decision support for selecting a reasonable fleet size. A generic simulation model is developed in the Simphony.NET environment. Weather, road network, and truck speed data are entered as inputs, and Monte Carlo simulation is used to generate random snow events to quantify performance. A case study is developed and presented to demonstrate the practicality and feasibility of the proposed model.
This paper proposes a novel deep learning-integrated framework for deriving reliable simulation input models by incorporating multi-source information. The framework sources and extracts multi-source data generated from construction operations, which provides rich information for input modeling. The framework implements Bayesian deep neural networks to facilitate the incorporation of richer information in input modeling. A case study on a road paving operation is performed to test the feasibility and applicability of the proposed framework. Overall, this research enhances input modeling by deriving detailed input models, thereby augmenting decision-making processes in construction operations. This research also sheds light on promoting data-driven simulation through the incorporation of machine learning techniques.
Videogrammetry has demonstrated great potential in structural health monitoring (SHM), and there has been sustained interest in applying this non-contact technique in SHM. This paper focuses on the effect of temperature variation on the measurement accuracy of videogrammetry to examine its feasibility in deformation/displacement monitoring. A long-term indoor videogrammetric measurement test was conducted, and the performance of the videogrammetric displacement monitoring technique under ambient temperature was examined. The results showed that temperature variation does cause non-negligible errors, which exhibit not only a daily fluctuation pattern but also an overall trend. In terms of the daily fluctuation pattern, the horizontal measurement error and indoor air temperature are in satisfactory agreement, while the vertical measurement error is not. In terms of the overall trend, the vertical measurement error is highly correlated with indoor air temperature, with a positive linear relationship between them. However, the horizontal error shows a complicated pattern when temperature varies.
Comprehending how ground incidents and accidents arise and propagate during air traffic control is vital to both the efficiency and safety of air transportation systems. Historical data about airport operational events capture some of the processes by which improper collaboration among air traffic controllers and pilots results in incidents, accidents, and delays, but can hardly cover all possible air traffic control failures in various environmental conditions. Computational simulation can use historical data to identify repetitive aircraft movement behaviors and air traffic control processes shared across multiple air traffic control sessions and create stochastic models that represent the probabilities of certain aircraft movements and control events occurring under various contexts. This paper synthesizes four scenarios of aircraft operations around ramp areas during air traffic peak time to support the development of a quantitative spatiotemporal simulation that can help predict air traffic control risks based on historical data.
Bridges are important links in the US infrastructure system, and various inspection activities at different frequencies are needed to maintain and preserve bridges at an acceptable level of service. Recent guidelines by the Federal Highway Administration (FHWA) have mandated that state agencies inspect and manage bridges at an element level. To obtain element-level inspection data, different resources are needed for mobilizing, cleaning, and accessing all elements of a bridge, which complicates the optimization of resource allocation for inspection activities. This study proposes a simulation-based, easy-to-use planning tool for bridge inspection to effectively estimate the minimum resources required to complete all the FHWA-mandated inspections using four attributes (deck area, inspection frequency, structure type, and scour criticality) from the bridge inventory database. The results of the simulation model show resource utilization under different scenarios, which supports the planning of element-level bridge inspection to ensure optimum resource allocation.
With the continuous industrialization of building construction, modular construction and off-site prefabrication methods have been applied much more thoroughly and comprehensively to achieve higher efficiency and better quality control, as the major building components can be produced in a factory setting, which reduces the influence of uncontrolled factors. This paper employs discrete event simulation, using the Simphony.NET tool, to model the process of transporting modules and assembling them on the construction site of a future multi-residential project. By adding more details, such as weather and traffic conditions, the simulation results become more accurate. In addition, as most simulation models for modular construction processes focus mainly on the assembly of modules on site, this paper also quantifies the weather influence in terms of project duration and manpower utilization. Furthermore, the simulation model can provide a general guide for the comparison of various scenarios.
This paper addresses a practical planning problem in bridge steel girder fabrication in an attempt to illuminate why the identified problem does not lend itself well to existing solutions for construction planning. A simulation-based approach is presented for project scheduling and production planning at a structural steel fabrication shop. The shop simultaneously produces girders for various clients in the construction of multiple bridges. Particular emphasis is placed on how to interpret and represent simulation outputs in terms of customized schedules of various details so as to cater to the needs of different stakeholders involved at multiple management levels. The applicability of the proposed approach is demonstrated with a case study based on real-world settings.
Human navigation simulation is critical to many civil engineering tasks and is of central interest to the simulation community. Most human navigation simulation approaches rely on classic psychology evidence or on assumptions that still need further proof. Such overly simplified and generalized assumptions about navigation behaviors do not capture individual differences in spatial cognition and navigation decision-making, or the impacts of diverse ways of displaying spatial information. This study proposes visual attention patterns during floorplan review as a stronger predictor of human navigation behaviors. To set the theoretical foundation, a Virtual Reality (VR) experiment was performed to test whether visual attention patterns during spatial information review can predict the quality of spatial memory, and how this relationship is affected by diverse ways of displaying information, including 2D, 3D, and VR. The results set a basis for future prediction model development.
Safety incidents are an expected part of construction; their occurrence leads to cost and schedule overruns, and sometimes severe worker injury. Incidents can be reduced through mitigation in the planning phase. To reduce safety incidents, proper quantification of risk impact is necessary throughout the project. Incorporating continuously occurring safety risks within a project network schedule represented by discrete activities remains challenging. This paper proposes a methodology for building a combined discrete-event continuous simulation model to predict safety-incident related schedule delays. An algorithm uses project information stored in a database to automatically construct a Critical Path Method (CPM) network in Simphony.NET. The model simulates daily risk occurrence using a discrete-event process to model reduced project productivity. An illustrative example is explained, and results are compared with a base case CPM network. Activities having a higher risk of incident occurrence are recorded to help practitioners develop proper mitigation strategies during project planning.
Public sentiment is a direct public-centric indicator for planning effective actions. Despite its importance, systematic and reliable modeling of public sentiment remains untapped in previous studies. This research aims to develop a Bayesian approach for quantitative public sentiment modeling that is capable of incorporating the inherent uncertainty of the interview dataset. This study comprises three steps: (1) quantifying prior sentiment information and new sentiment observations with a Dirichlet distribution and a multinomial distribution, respectively; (2) deriving the posterior distribution of sentiment probabilities by combining the Dirichlet and multinomial distributions via Bayesian inference; and (3) measuring public sentiment by aggregating sampled sets of sentiment probabilities with an application-based measure. A case study on Hurricane Harvey is provided to demonstrate the feasibility and applicability of the proposed approach. The developed approach also has the potential for modeling all types of probability-based measures.
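The conjugate core of steps (1)-(3) can be sketched in a few lines; the prior pseudo-counts, observed counts, and aggregate measure below are made up for illustration and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dirichlet prior over sentiment-class probabilities, multinomial counts from new observations,
# conjugate Dirichlet posterior, and an aggregate measure computed over posterior samples.
prior = np.array([2.0, 5.0, 3.0])          # pseudo-counts for (negative, neutral, positive)
counts = np.array([40, 25, 85])            # observed sentiment labels
posterior = prior + counts                  # Dirichlet-multinomial conjugacy

samples = rng.dirichlet(posterior, size=10_000)        # sampled sets of sentiment probabilities
net_sentiment = samples[:, 2] - samples[:, 0]          # one possible application-based measure
print(net_sentiment.mean(), np.percentile(net_sentiment, [2.5, 97.5]))
```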
As decision models involving Monte Carlo simulations become more widely used and more complex, the need to organize and share components of models increases. Standards have already been proposed that would facilitate the adoption and quality control of simulations. I propose a new pseudo-random number generator (PRNG) as part of those standards. This PRNG is a simple, multi-dimensional, counter-based equation which compares favorably to other widely used PRNGs in statistical tests of randomness. Wide adoption of this standard PRNG may be helped by the fact that the entire algorithm fits in a single cell of an Excel spreadsheet. Quality control and auditability will also be helped because it produces the same results in any common programming language regardless of differences in precision.
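The paper's single-cell formula is not reproduced here; purely for illustration, the sketch below shows a generic counter-based construction (a SplitMix64-style mix of seed and counter) with the defining property the abstract highlights: each output depends only on (seed, counter), so streams can be reproduced across languages and indexed at random. This particular mix relies on 64-bit integer arithmetic and would not fit in a plain Excel formula, so it is emphatically not the proposed PRNG.

```python
MASK = (1 << 64) - 1

def counter_random(seed: int, counter: int) -> float:
    """SplitMix64-style mix of (seed, counter); returns a uniform value in [0, 1)."""
    z = (seed + counter * 0x9E3779B97F4A7C15) & MASK
    z = ((z ^ (z >> 30)) * 0xBF58476D1CE4E5B9) & MASK
    z = ((z ^ (z >> 27)) * 0x94D049BB133111EB) & MASK
    z ^= z >> 31
    return z / 2**64

# Any stream position can be generated directly, without stepping through prior draws.
print([round(counter_random(42, i), 6) for i in range(5)])
```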
The metalog probability distributions can represent virtually any continuous shape with a single family of equations, making them far more flexible for representing data than the Pearson and other distributions. Moreover, the metalogs are easy to parameterize with data without non-linear parameter estimation, have simple closed-form equations, and offer a choice of boundedness. Their closed-form quantile functions (F^-1) enable fast and convenient simulation. The previously unsolved problem of a closed-form analytical expression for the sum of lognormals is one application. Uses include simulating the total impact of an uncertain number N of risk events (each with iid [independent, identically distributed] individual lognormal impact), noise in wireless communications networks, and many others. Beyond sums of lognormals, the approach may be directly applied to represent and subsequently simulate sums of iid variables from virtually any continuous distribution, and, more broadly, products, extreme values, or other many-to-one changes of iid or correlated variables.
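As a small illustration, the sketch below evaluates a four-term unbounded metalog quantile function and uses it for inverse-transform simulation, including draws of a sum of i.i.d. terms; the coefficients are hypothetical rather than fitted to data, and the lognormal-sum representation derived in the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def metalog_quantile(y, a):
    """
    Unbounded 4-term metalog quantile function M(y) = a1 + a2*L + a3*(y-0.5)*L + a4*(y-0.5),
    with L = ln(y / (1 - y)). In practice the coefficients a are fit to data by ordinary least squares.
    """
    L = np.log(y / (1.0 - y))
    return a[0] + a[1] * L + a[2] * (y - 0.5) * L + a[3] * (y - 0.5)

a = np.array([10.0, 2.0, 0.5, 1.0])        # hypothetical coefficients (assumed to give a valid metalog)
u = rng.uniform(size=100_000)
x = metalog_quantile(u, a)                 # inverse-transform simulation from the closed-form quantile

# E.g., simulate the distribution of a sum of 5 i.i.d. draws (the sums-of-variables use case).
sums = metalog_quantile(rng.uniform(size=(100_000, 5)), a).sum(axis=1)
print(x.mean(), np.percentile(sums, [5, 50, 95]))
```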
Using a model as an input data source for integration into another model carries with it risks to the validity of the model composition. This paper presents research into the inherent risks of model integration. The research decomposes models into sets of semantic concepts, allowing for a calculation of structural alignment. Measurable changes in a model’s output due to the integration of another model provide an impact assessment. Risks to decisions arise from incompatible assumptions and constructions of models. We present a risk assessment as a tuple containing differences in models’ alignment across three axes and changes in a model’s output metrics.
Safety requirements based on Probability of Exceedance (POE) take into account only probabilities but do not control the magnitude of outcomes in the tail of the probability distribution. Such requirements do not constrain the very large outcomes that lead to industrial catastrophes. The paper shows how to upgrade safety requirements with the Buffered Probability of Exceedance (bPOE). The bPOE risk function equals a tail probability with a known mean of the tail (i.e., it is the probability of the tail such that the mean of the tail equals the specified value). The paper considers two application areas: 1) credit ratings of financial synthetic instruments (e.g., AAA, AA, ... credit ratings); 2) materials strength certification (A-basis, B-basis).
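A sample-based sketch of the bPOE computation is shown below, using a known convex minimization representation of bPOE; the lognormal sample and threshold are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def bpoe(samples, z):
    """
    Sample-based buffered probability of exceedance at threshold z, via the representation
    bPOE_z(X) = min_{a >= 0} E[(a*(X - z) + 1)^+],
    i.e., the probability of the tail whose conditional mean equals z.
    """
    x = np.asarray(samples, dtype=float)
    obj = lambda a: np.mean(np.maximum(a * (x - z) + 1.0, 0.0))
    res = minimize_scalar(obj, bounds=(0.0, 1e6), method="bounded")   # objective is convex in a
    return float(min(res.fun, 1.0))

x = np.random.default_rng(0).lognormal(0.0, 1.0, size=200_000)
z = 5.0
print("POE :", float((x > z).mean()))   # plain probability of exceedance
print("bPOE:", bpoe(x, z))              # larger, because it also accounts for tail magnitude
```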
The design, construction, and operation of civil infrastructure that is more environmentally, socially, and economically responsible over its life cycle, from extraction of raw materials to end of life, is increasingly desirable worldwide. This paper presents a probabilistic framework for the design of civil infrastructure that achieves targeted improvements in quantitative sustainability indicators. The framework consists of two models: (i) probabilistic service life prediction models for determining the time to infrastructure repair, and (ii) probabilistic life cycle assessment (LCA) models for measuring the impact of a given repair. Specifically, this paper introduces a new mathematical approach, SIPmath, to simplify this design framework and potentially accelerate adoption by civil infrastructure designers. A reinforced concrete bridge repair in Norway is used as a case study to demonstrate SIPmath implementation. The case study shows that SIPmath allows designers to engage in sustainable design with probabilistic methods within the native, user-friendly Microsoft Excel® interface.
Analyzing Emergency Evacuation Strategies For Large Buildings Using Crowd Simulation Framework
Imran Mahmood, Talha Nadeem, and Faiza Bibi (Center for Research in Modeling & Simulation (CRIMSON), National University of Sciences and Technology) and Xiaolin Hu (Department of Computer Science, Georgia State University)
The occurrence of natural or man-made emergencies can be quite complex and demands flawless preparedness, through tested strategies, to ensure the safety of individuals. For large-scale infrastructure, whether commercial or residential, having a reliable evacuation strategy is crucial. The formulation and evaluation of these evacuation strategies is, however, a daunting challenge. In this paper, we propose a Crowd Simulation and Analysis framework for the formulation and evaluation of effective evacuation strategies in large buildings, using real-scale building structures and an agent-based approach. We further propose an algorithm to devise an evacuation strategy. We first demonstrate the functionality of our algorithm using a simple example and then apply it in a campus evacuation case study with three scenarios. The main goal of this research is to assist regulatory authorities in developing effective disaster management plans through the use of M&S methods and tools.
Stakeholder-centric Analyses of Simulated Shipping Port Disruptions
Gabriel Arthur Weaver (University of Illinois at Urbana-Champaign), Glen R. Salo (Heartland Science and Technology Group), and Mark Van Moer (University of Illinois at Urbana Champaign)
The Maritime Transportation System is crucial to the global economy, accounting for more than 80% of global merchandise trade in volume and 67% of its value in 2017. Within the US economy alone, this system accounted for roughly a quarter of GDP in 2018. This paper defines an approach to measure the degree to which individual stakeholders, when disrupted, affect the commodity flows of other stakeholders and the entire port. Using a simulation model based on heterogeneous datasets gathered from fieldwork with Port Everglades in FL, we look at the effect of varying the timing and location of disruptions, as well as response actions, on the flow of imported commodities. Insights based upon our model inform how and when stakeholders can impact one another's operations and should thereby provide a data-driven, strategic approach to inform the security plans of individual companies and shipping ports as a whole.
Supply chain disruptions can lead to immense financial losses for affected enterprises. Quantitative models that analyze the impact of disruptions and the effect of possible mitigation strategies on the overall network are needed to support the decision-making process of practitioners. Therefore, we present an agent-based model of a supply network with eleven entities to analyze the benefits of dynamic pricing when two producers in the network are confronted with material flow disruptions of different durations. Our results show that dynamic pricing can reduce the financial burden on the total supply network but can also strongly influence which entities are most affected by the disruption.
Risk analysis is one of the most challenging tasks from a business process management perspective. In healthcare, risk management focuses on events that have a relevant impact on patient safety. We investigate the context of a blood bank, one of the most critical hospital departments, by comparing two different modeling approaches for performing risk-aware business process management. Our main interest here is to discuss the advantages and disadvantages of discrete-event and agent-based modeling approaches. We adopt two specific tools and collect feedback from modelers, staff, and decision makers. We identify differences in the analysis of risks from the two perspectives, such as the possibility of including agent-environment interactions, as well as structured approaches to discovering potential failures. Our results include an assessment for risk management, shedding some light on practical applications of process modeling and simulation in healthcare.
Modelling the spatio-temporal dynamics and evolution of collision risk in an airspace is critical for multiple purposes. First, the model could be used to diagnose the critical points where the level of risk escalates due to particular airspace configurations and/or events. Second, the model reveals information on the rate of risk escalation in an airspace, which could be used as a risk indicator in its own right. The aim of this paper is to present an Airspace Collision Risk Simulator for safety assessment. This is achieved by developing a fast-time simulator with a suitable fidelity for spatial-temporal analysis of collision risk using clustering methods. This simulator integrates the airspace model, aerodynamic model and traffic flow model for collision risk computation and visualisation. The proposed simulator is the first attempt to provide an insight into the evolution of collision risk for airspace safety assessment with potential benefits to the operational environment.
Automatic Dependent Surveillance-Broadcast (ADS-B) has become one of the key technologies in modernizing the National Airspace System (NAS) by enhancing air traffic surveillance and improving situational awareness. In this study, we used the Global Oceanic Model, a fast-time computer simulation tool, to evaluate the potential workload of air traffic controllers (ATC) using satellite-based ADS-B and reduced separation standards in the North Atlantic (NAT) oceanic airspace. This study uses two metrics to quantify the potential ATC workload: first, the number of tactical resolution maneuvers (changing flight levels, changing Mach number, and lateral deviations) used by ATC to resolve potential conflicts; second, the number of events that ATC needs to monitor when aircraft pairs are located within a protection boundary of 50% above the separation minima. Our results show that equipping aircraft with ADS-B technologies can reduce the ATC workload and increase the efficiency and capacity of the NAT airspace.
This paper presents a simulation-based study of the performance implications of spacing buffers in the United States’ Air Traffic Control System. A spacing buffer is used during instrument flight operations, when radar control is active. The buffer represents additional spacing between successive flights beyond what is required so that separation violations rarely, if ever, occur. In contrast, during non-radar visual operations, spacing at or near the minimum occurs when pilots request and are granted visual clearances. Using published data derived from multilateration surveillance systems, this study describes a simulation experiment created to understand the performance implications of actual separations for controlled flights when instrument flight rules are active. The results show, unsurprisingly, that there is a statistically meaningful increase in delays as the spacing buffer increases. These results also demonstrate how, during weather-triggered radar operations, the air traffic system at busy airports can become congested and delays escalate.
According to a 2018 report by the World Health Organization (WHO), 1.35 million people die each year due to road accidents globally. Post-accident care plays a crucial role in reducing fatalities. In a country like India, it is becoming increasingly difficult to provide post-accident services on time as congestion increases. In this paper, we propose a system that decreases the post-accident response time of Emergency Medical Services (EMS) in India by adding another layer of patient transport vehicles. The paper discusses the new system design together with a simulation model and algorithm. When compared with the traditional system, it provides an overall time reduction of approximately 3 minutes with a 97% survival rate.
Motivated by the decision risk caused by both the ambiguity of emergency information and large group preference conflict, a risk-minimizing method for large group decision making is proposed. First, the decision group is clustered by preferences to form aggregated preference matrices. Second, the interval-valued intuitionistic fuzzy (IVIF) distance is proposed in the form of an intuitionistic fuzzy (IF) number in order to reduce the loss of preference information, and a generalized IVIF number is also defined. Combining this with prospect theory, the IF prospect matrix of each cluster is obtained by conversion. Then, an optimization model of large group decision fuzzy conflict entropy is constructed. The prospect matrices and attribute weights are aggregated to compute comprehensive prospect values, which determine the ranking of alternatives. Finally, a case analysis of a coal-mine engineering water-penetration accident rescue and a comparison are used to illustrate the rationality and effectiveness of the method.
Track Coordinator - Scientific Applications: Esteban Mocskos (University of Buenos Aires (AR), CSC-CONICET), Sergio Nesmachnow (Universidad de la República)
Scientific Applications
Numerical simulations
Chair: Esteban Mocskos (University of Buenos Aires (AR), CSC-CONICET)
The Effect of Symmetric Permutations on the Convergence of a Restarted GMRES Solver with ILU-type Preconditioners
Sanderson L. Gonzaga de Oliveira and C. Carvalho (UFLA) and Carla Osthoff (LNCC)
This paper is concerned with applying heuristics for bandwidth reduction as a preprocessing step of a restarted Generalized Minimal Residual (GMRES for short) solver preconditioned by ILU-type preconditioners. Hundreds of heuristics have been proposed to solve the problem of bandwidth reduction since the mid-1960s. Previous publications have reviewed several heuristics for bandwidth reduction. Based on this experience, this paper evaluates nine low-cost symmetric permutations. Numerical simulations are presented to investigate the influence of these orderings on the convergence of the preconditioned GMRES solver restarted every 50 steps when applied to large-scale nonsymmetric and not positive definite matrices. This paper shows the most promising combination of preconditioner and ordering for each linear system used.
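As a concrete illustration of the pipeline this abstract describes (preprocess with a symmetric permutation, build an ILU-type preconditioner, then run restarted GMRES), here is a minimal SciPy sketch. It is our illustration, not the authors' code: the test matrix, drop tolerance, and the choice of reverse Cuthill-McKee as the bandwidth-reducing ordering are placeholder assumptions.
```python
# Hedged sketch: symmetric permutation (reverse Cuthill-McKee) as a preprocessing step
# before an ILU-preconditioned GMRES solve restarted every 50 steps.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee
from scipy.sparse.linalg import spilu, gmres, LinearOperator

# Toy nonsymmetric sparse system standing in for a large-scale test matrix.
n = 500
rng = np.random.default_rng(0)
A = (sp.random(n, n, density=0.01, random_state=rng, format="csr") + sp.eye(n) * n).tocsr()
b = rng.standard_normal(n)

# Symmetric permutation: reorder rows and columns with the same ordering.
perm = reverse_cuthill_mckee(A, symmetric_mode=False)
Ap = A[perm, :][:, perm].tocsc()
bp = b[perm]

# ILU-type preconditioner built on the permuted matrix.
ilu = spilu(Ap, drop_tol=1e-4, fill_factor=10)
M = LinearOperator(Ap.shape, ilu.solve)

# GMRES restarted every 50 steps, as in the paper.
x_perm, info = gmres(Ap, bp, M=M, restart=50, maxiter=2000)
x = np.empty_like(x_perm)
x[perm] = x_perm          # undo the permutation to recover the original ordering
print("converged" if info == 0 else f"gmres info = {info}")
```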
This paper shows the results yielded by several numerical methods when applied to the 1-D modified Burgers' equation. In particular, the paper evaluates a new hybrid method against ten high-order methods when applied to the same equation. The novel numerical method for convection-dominated fluid or heat flows is based on the Hopmoc method and backward differentiation formulas. The results highlight that the new hybrid method yields promising accuracy compared with several existing high-order methods.
This article presents parallel multithreading self-gravity simulations for astronomical agglomerates, applying High Performance Computing techniques to allow the efficient simulation of systems with a large number of particles. Considering the time scales needed to properly simulate the processes involved in the problem, two parallel mesh-based algorithms to speed up the self-gravity calculation are proposed: a method that updates the occupied cells of the mesh, and a method to divide the domain based on the Barnes-Hut tree. Results of the experimental evaluation performed over a scenario with two agglomerates orbiting each other indicate that the Barnes-Hut method accelerates execution times by over 10× compared to the occupied-cells method. These performance improvements allow scaling up to perform realistic simulations with a large number of particles (i.e., tens of millions) in reasonable execution times.
With the increased complexity of customers' choice behaviors, practical optimization approaches often involve decomposing a network revenue management problem into multiple single-leg problems. While dynamic programming approaches can be used to solve single-leg problems exactly, they are not scalable and require precise information about customers' arrival rates. On the other hand, traditional heuristics are often static and do not explicitly take the remaining time horizon into account in the optimization. This motivates us to find scalable and dynamic heuristics that work well with complex customer choice models. We develop two expected marginal seat revenue type heuristics for single-leg dynamic revenue management problems in the airline industry and evaluate their performance using Monte Carlo simulation. The initial simulation results indicate that our proposed heuristics are computationally efficient and fairly robust. This study provides a foundation for potential future extensions to solve larger network problems.
More than two decades ago, Butler and Finelli examined the problem of experimentally demonstrating the reliability of safety critical software and concluded that it was impractical. We revisit this conclusion in the light of recent advances in computer system virtualization technology and the capability to link virtualization tools to simulation models of physical environments. A specific demonstration of testing for reliability is offered using software that is part of a building control system. Extrapolating the results of this demonstration, we conclude that experimental demonstrations of high reliability may now be feasible for some applications.
By fabricating Nb films on top of arrays of Ni nanodots with different geometries, the vortex lattice for specific values of the external applied magnetic field is modified by the array of periodic pinning potentials. In this work, a GPU-based code developed from scratch to simulate this phenomenon is presented. It evaluates the vortex–vortex and the vortex–nanodot interactions, providing the total interaction between vortices and pinning sites, as well as the position of the vortices in the array unit cell. This final position is obtained with two stochastic processes (simulated annealing and basin hopping), and the code is able to simulate square, rectangular, or triangular arrays of nanodefects of different sizes. A computational performance study is also presented.
The creation of a new modeling and simulation engineering program presented the opportunity to evaluate the core skills desirable in a well-rounded graduate. Software was identified as an often neglected aspect of modeling and simulation programs and an attempt was made to remedy this. This paper discusses the software skills identified as necessary/desirable in a graduate. The focus is on discrete event simulation (DES), though the skills are transferable to other paradigms. The discussion is partitioned into discussing skills appropriate for simulation application development and simulation tool development. The discussion is further partitioned to discuss core computer science skills (object-oriented programming and data structures), software architecture, graphics development, implementing DES worldviews, and an ability to work with open source software. The result is a graduate that is desirable to industry and graduate research.
Systems engineering is a bit different from other engineering disciplines in that students from many disciplines are enrolled in the program. Therefore, the objective is not to teach a simulation subject in depth, but rather to introduce the students to different techniques so that they can work with and manage simulation staff on a project. However, they need some “hands on” experience so that they know how challenging simulations can be, avoiding the trap of underestimating the effort involved. This paper describes the approach used at Georgia Tech to teach a compressed 7 week simulation survey course called ASE 6003 Modeling & Simulation in Systems Engineering. We describe the techniques used, our approach and the results achieved over recent years of teaching simulation in this format. Finally we discuss lessons learned and offer suggestions for others interested in offering a similar course.
Active Learning Experience In Simulation Class Using A LEGO®-Based Manufacturing System
Giovanni Lugaresi (Politecnico di Milano); Ziwei Lin (Politecnico di Milano, Shanghai Jiao Tong University); and Nicla Frigerio, Mengyi Zhang, and Andrea Matta (Politecnico di Milano)
Simulation classes have the main advantage of deeply involving and stimulating students through intensive work in computer laboratories and projects. The counterpart is often the lack of the real system that is subject to simulation modeling. Creating, building, and validating a simulation model of a system that cannot be observed represents a real obstacle for student learning. In this paper, we describe the experience from an educational project launched in a course on manufacturing systems for mechanical engineering students, in which discrete event simulation plays a fundamental role in performance evaluation. The project has been designed to exploit student interaction with a LEGO-based physical system. Students have the possibility to learn from the physical system and to run experiments together with the simulation model built during project activities. The project details are also described with the hope that the project becomes a simulation case study and is replicated in other courses.
This paper takes three different viewpoints on the problems faced in the development of Modeling and Simulation (M&S) educational material. Two educational environments – academic and commercial – are discussed. The paper starts with a discussion on the establishment of the undergraduate M&S engineering program at Old Dominion University. The paper then explores the education in a commercial environment, discussing the challenges and possible connections between the two environments. The associated issues which have arisen in the M&S community, especially the accessibility problem, are discussed through the outcomes of a workshop. There is a strong relationship between M&S education and the ease of determining M&S usefulness (i.e., the evidence of success in M&S applications). This paper advocates for detailed case studies to be developed and used within the M&S classrooms. The more case studies (both successful and non-successful), the better the chance to understand different types of complex problems.
This paper focuses on how Discrete Event Simulation (DES) can become more widely used in business schools, since this type of simulation allows students to do interesting simulation projects in companies. We list a large number of project topics that we, in four countries, have given our business students and which they have been able to carry out successfully as part of an introductory course requiring one month of work. This list of topics can hopefully also give other students ideas for project work. For some of these topics we also bring out interesting details.
This paper aims to survey current simulation teaching practices and identify the challenges and opportunities for improvement in simulation education. A survey was carried out with authors and chairs of the Simulation Education tracks of the Winter Simulation Conference editions from 2000 to 2017. The results highlight the primary practices, difficulties, and opportunities for improvement raised by professionals who currently teach or used to teach simulation in undergraduate courses in engineering, computer science, and business administration, among others. Two issues highlighted in the survey are the balance between theory and practice, and the role of simulation projects as a tool to consolidate the learning process in simulation education. We found that professors value simulation projects in the discipline and the importance of working with real-world problems. Finally, the survey identified a concern within the academic community with discussing and improving the process of teaching and learning simulation.
This paper presents two Monte Carlo simulations that we use in our Operations Management course to support the teaching of Lean concepts. Students are experienced and inexperienced, international and domestic, and technically savvy and technically challenged, and the course is taught both online and on campus. The educational aims of the simulations are to teach our students: (1) the concept of variation and its impact on service system throughput; (2) how capacity buffers work and that they can add significant cost; and (3) how Lean approaches that remove non-value-added activities can result in customer experience improvements without adding significant cost. We achieve these aims through the use of two service system simulations – one that can be done manually and the other that uses Excel.
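A minimal sketch of the first two educational aims (our illustration, not the authors' classroom exercises): a balanced two-station service line with a finite capacity buffer, simulated in Python rather than manually or in Excel. The capacities, buffer sizes, and distributional choices are assumptions made for the demonstration.
```python
# Hedged sketch of aims (1) and (2): in a balanced two-station service line, variability
# alone reduces throughput, and a larger capacity buffer between the stations recovers
# some of it (at the cost of extra work-in-process). All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def throughput(cv, buffer_size, periods=200_000, mean_cap=10.0):
    """Average customers served per period by station 2."""
    sd = cv * mean_cap
    buf, done = 0.0, 0.0
    for _ in range(periods):
        c1 = max(rng.normal(mean_cap, sd), 0.0)   # station 1 capacity this period
        c2 = max(rng.normal(mean_cap, sd), 0.0)   # station 2 capacity this period
        produced = min(c1, buffer_size - buf)     # station 1 blocked when the buffer is full
        buf += produced
        served = min(c2, buf)                     # station 2 starved when the buffer is empty
        buf -= served
        done += served
    return done / periods

for cv in (0.0, 0.25, 0.5):
    for buffer_size in (10, 30):
        print(f"CV={cv:.2f}, buffer={buffer_size:>2}: throughput ≈ {throughput(cv, buffer_size):.2f}")
```
With zero variability the line delivers its full mean capacity; as the coefficient of variation grows, throughput drops, and enlarging the buffer buys part of it back.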
Track Coordinator - Simulation Optimization: Juergen Branke (Warwick Business School), Susan R. Hunter (Purdue University), Peter Salemi (The MITRE Corporation)
Simulation Optimization
Ranking and Selection
Chair: Chun-Hung Chen (George Mason University)
Balancing Optimal Large Deviations in Ranking and Selection
Ye Chen (Virginia Commonwealth University) and Ilya O. Ryzhov (University of Maryland)
The ranking and selection problem deals with the optimal allocation of a simulation budget to efficiently identify the best among a finite set of unknown values. The large deviations approach to this problem provides very strong performance guarantees for static (non-adaptive) budget allocations. Using this approach, one can describe the optimal static allocation with a set of highly nonlinear, distribution-dependent optimality conditions whose solution depends on the unknown parameters of the output distribution. We propose a new methodology that provably learns this solution (asymptotically), is computationally efficient, has no tunable parameters, and works for a wide variety of output distributions.
We propose and evaluate novel sampling policies for a Bayesian ranking and selection problem with pairwise comparisons. We introduce the lookahead contraction principle and apply it to three types of value factors for lookahead policies. The resulting lookahead contraction policies are analyzed both with the minimal number of lookahead steps required for obtaining informative value factors, and with a fixed number of lookahead steps. We show that lookahead contraction reduces the minimal number of required lookahead steps, and that contraction guarantees finiteness of the minimal lookahead. For minimal lookahead, we demonstrate empirically that lookahead contraction never leads to worse performance, and that lookahead contraction policies based on expected value of improvement perform best. For fixed lookahead, we show that all lookahead contraction policies eventually outperform their counterparts without contraction, and that contraction results in a performance boost for policies based on predictive probability of improvement.
We consider multi-objective ranking and selection problems with heteroscedastic noise and correlation between the mean values of alternatives. From a Bayesian perspective, we propose a sequential sampling technique that uses a combination of screening, stochastic kriging metamodels, and hypervolume estimates to decide how to allocate samples. Empirical results show that the proposed method only requires a small fraction of samples compared to the standard EQUAL allocation method, with the exploitation of the correlation structure being the dominant contributor to the improvement.
This paper considers the problem of choosing the best design alternative under a small simulation budget where making inferences about all alternatives from a single observation could enhance the probability of correct selection. We propose a new selection rule exploiting the relative similarity information between pairs of alternatives and show its improvement on selection performance, evaluated by the probability of correct selection, compared to selection based on collected sample averages. We illustrate the effectiveness by applying our selection index on simulated ranking and selection problems using two well-known budget allocation policies.
We present two sequential allocation frameworks for selecting from a set of competing alternatives when the decision maker cares about more than just the simple expected rewards. The frameworks are built on general parametric reward distributions and assume the objective of selection, which we refer to as utility, can be expressed as a function of the governing reward distributional parameters. The first algorithm, which we call utility-based OCBA (UOCBA), uses the delta-technique to find the asymptotic distribution of a utility estimator to establish the asymptotically optimal allocation by solving the corresponding constrained optimization problem. The second, which we refer to as utility-based value of information (UVoI) approach, is a variation of the Bayesian value of information (VoI) techniques for efficient learning of the utility. We establish the asymptotic optimality of both allocation policies and illustrate the performance of the two algorithms through numerical experiments.
We consider the problem of approximating the minimum cost of a finite set of alternative systems. We cannot directly observe the cost of the systems, but we can estimate the cost using simulation. The simulation run lengths are adaptively chosen for each system. We describe an optimization algorithm and establish a bound on the error convergence rate. Compared with a single system, the error grows by an additional factor of the square root of the logarithm of the number of systems and the simulation budget.
We propose a fully sequential experimental design procedure for the stochastic kriging (SK) methodology of fitting unknown response surfaces from simulation experiments. The procedure first estimates the current SK model performance by jackknifing the existing data points. Then, an additional SK model is fitted on the jackknife error estimates to capture the landscape of the current SK model performance. Methodologies for balancing the exploration and exploitation trade-off in Bayesian optimization are employed to select the next simulation point. Compared to experimental design procedures, our method is robust to the SK model specifications. We design a dynamic allocation algorithm, which we call kriging-based dynamic stochastic kriging (KDSK), and illustrate its performance through two numerical experiments.
cORe is a new cyber-infrastructure which will facilitate computational Operations Research exchange. OR models arise in many engineering domains, such as design, manufacturing, and services (e.g., banking/finance, health systems), as well as specific infrastructure-centric applications such as logistics/supply chains, power system operations, telecommunications, traffic/transportation, and many more. In addition, modern OR tools have also been adopted in many foundational disciplines, such as computer science, machine learning, and others. Given the broad footprint of OR, the development of a robust cyber-infrastructure has the potential to not only promote greater exchange of data, models, software, and experiments but also enhance reproducibility and re-usability, both within OR, and across multiple disciplines mentioned above. cORe also has the potential to drastically reduce the computational burden on research communities which study resource allocation using analytics. This paper presents an overview of the functionality, design, and computations using cORe.
We describe major improvements to the testing capabilities of SimOpt, a library of simulation-optimization problems and solvers. Foremost among these improvements is a transition to GitHub that makes SimOpt easier to use and maintain. We also design two new wrapper functions that facilitate empirical comparisons of solvers. The wrapper functions make extensive use of common random numbers (CRN) both within and across solvers for various purposes; e.g., identifying random initial solutions and running simulation replications. We examine some of the intricacies of using CRN to compare simulation-optimization solvers.
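A toy illustration of the common-random-numbers mechanism described above (ours; SimOpt's actual wrapper functions and problem/solver classes are not reproduced here): tying each replication's random stream to the replication index alone means every solver sees the same noise when evaluating its candidate solutions, so observed differences reflect the solutions rather than sampling luck.
```python
# Hedged sketch of common random numbers (CRN) across solvers: replication r of any
# candidate solution always draws from the same seeded stream.
import numpy as np

def simulate(solution, replication, base_seed=12345):
    """Noisy evaluation of a candidate; the stream depends only on the replication index."""
    rng = np.random.default_rng([base_seed, replication])   # shared stream across solvers
    noise = rng.normal(0.0, 1.0, size=len(solution))
    return float(np.sum((np.asarray(solution) - 1.0) ** 2) + noise @ noise / len(solution))

def random_search(rng, n_iters=50, n_reps=10):
    """Toy 'solver': random search averaging CRN replications of each candidate."""
    best_x, best_val = None, np.inf
    for _ in range(n_iters):
        x = rng.uniform(-2, 4, size=3)
        val = np.mean([simulate(x, r) for r in range(n_reps)])
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Two placeholder "solvers" compared under identical noise via CRN.
for name, seed in [("solver_A", 1), ("solver_B", 2)]:
    x, v = random_search(np.random.default_rng(seed))
    print(name, np.round(x, 2), round(v, 3))
```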
We consider a recently proposed simulation-based decision-making framework, called offline-simulation-online-application (OSOA). In this framework, simulation experiments are not performed after the target problem is set up with all the input parameters; instead, they are performed before that with all the possible parameters that might come up in the target problem. Then, these computational results are directly used in real time for system evaluation or system optimization when the input parameters of the target problem are known. In this paper, we follow this framework and use stochastic kriging (SK) to model the system performance from the covariate space. Two measures, namely IMSE and IPFS, are proposed to evaluate the prediction errors for system evaluation and system optimization respectively. We establish the convergence rates of these two measures. They quantify the magnitude of prediction errors for the online application after a certain period of time is spent for the offline simulation.
In Bayesian feasibility determination, a typical reward function is either the 0-1 or linear reward function. We propose a new type of reward function for Bayesian feasibility determination. Our proposed reward function emphasizes the importance of barely feasible/infeasible systems whose mean performance measures are close to the threshold. There are two main reasons why the barely feasible/infeasible systems are more important. First, the overall accuracy on solving a feasibility determination problem is heavily affected by those difficult systems. Second, if the decision maker wants to further find the best feasible system, it is likely that one of the barely feasible/infeasible systems is the best feasible. We derive a feasibility determination procedure with the new reward function in a Bayesian framework. Our experiments show that the Bayesian optimal procedure with the new reward function performs the best in making correct decisions on difficult systems when compared to existing procedures.
Bayesian Simulation Optimization with Common Random Numbers
Michael Pearce (University of Warwick, Uber AI Labs); Matthias Poloczek (Uber AI Labs, University of Arizona); and Juergen Branke (Warwick Business School)
We consider the problem of stochastic simulation optimization with common random numbers over a numerical search domain. We propose the Knowledge Gradient for Common Random Numbers (KG-CRN) sequential sampling algorithm, a simple, elegant modification to the Knowledge Gradient that incorporates the use of correlated noise in simulation outputs with Gaussian Process meta-models. We compare this method against the standard Knowledge Gradient and a more recently proposed variation that allows for pairwise sampling. Our method significantly outperforms both baselines under identical laboratory conditions while greatly reducing computational cost compared to pairwise sampling.
Deterministic and Discrete Simulation Optimization
Chair: Logan Mathesen (Arizona State University)
A New Partition-based Random Search Method for Deterministic Optimization Problems
Ziwei Lin (Shanghai Jiao Tong University, Politecnico di Milano); Andrea Matta (Politecnico di Milano); and Shichang Du (Shanghai Jiao Tong University)
The Nested Partition (NP) method is efficient for large-scale optimization problems. The most promising region is identified and partitioned iteratively. To guarantee global convergence, a backtracking mechanism is introduced. Nevertheless, if inappropriate partitioning rules are used, a large amount of backtracking occurs, greatly reducing the algorithm's efficiency. A new partition-based random search method is developed in this paper. In the proposed method, all generated regions are stored for further partitioning, and each region has a partition speed related to its posterior probability of being the most promising region. Promising regions have higher partition speeds while non-promising regions are partitioned slowly. The numerical results show that the proposed method finds the global optimum faster than the pure NP method when numerous high-quality local optima exist. It can also find all the identical global optima, if they exist, in the studied case.
We introduce the R-MGSPLINE (Retrospective Multi-Gradient Search with Piecewise Linear Interpolation and Neighborhood Enumeration) algorithm for finding a local efficient point when solving a multi-objective simulation optimization problem on an integer lattice. In this nonlinear optimization problem, each objective can only be observed with stochastic error and the decision variables are integer-valued. R-MGSPLINE uses a retrospective approximation (RA) framework to repeatedly call the MGSPLINE sample-path solver at a sequence of increasing sample sizes, using the solution from the previous RA iteration as a warm start for the current RA iteration. The MGSPLINE algorithm performs a line search along a common descent direction constructed from pseudo-gradients of each objective, followed by a neighborhood enumeration for certification. Numerical experiments demonstrate R-MGSPLINE's empirical convergence to a local weakly efficient point.
Global optimization techniques often suffer from the curse of dimensionality. In an attempt to face this challenge, high-dimensional search techniques try to identify and leverage the effective, lower dimensionality of the problem either in the original or in a transformed space. As a result, algorithms search for and exploit a projection or create a random embedding. Our approach avoids modeling of high-dimensional spaces and the assumption of low effective dimensionality. We argue that effectively high-dimensional functions can be recursively optimized over sets of complementary lower-dimensional subspaces. In this light, we propose the novel Subspace COmmunication for OPtimization (SCOOP) algorithm, which enables intelligent information sharing among subspaces such that each subspace guides the others towards improved locations. The experiments show that the accuracy of SCOOP rivals state-of-the-art global optimization techniques, while being several orders of magnitude faster and scaling better with the problem dimensionality.
We consider the question of identifying the set of all solutions to a system of nonlinear equations, when the functions involved in the system can only be observed through a stochastic simulation. Such problems frequently arise as first order necessary conditions in global simulation optimization problems. A convenient method of "solving" such problems involves generating (using a fixed sample) a sample-path approximation of the functions involved, and then executing a convergent root-finding algorithm with several random restarts. The various solutions obtained thus are then gathered to form the estimator of the true set. We investigate the quality of the returned set in terms of the expected Hausdorff distance between the returned and true sets. Our message is that a certain simple logarithmic relationship between the sample size and the number of random restarts ensures maximal efficiency.
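A small SciPy sketch of the recipe in this abstract, under illustrative assumptions (a one-dimensional system, twenty restarts, and a toy noise model); it is not the authors' experimental setup. It fixes a sample, solves the sample-path system from random restarts, collects the distinct roots, and reports the Hausdorff distance to the true solution set.
```python
# Hedged sketch: fix one sample, build a sample-path approximation of a noisy system of
# equations, run a root finder from several random restarts, and measure the Hausdorff
# distance between the set of roots found and the true solution set.
import numpy as np
from scipy.optimize import fsolve
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(7)
m = 2000                                   # fixed sample size
xi = rng.normal(0.0, 1.0, size=m)          # the fixed sample defining the sample path

def g_bar(x):
    """Sample-path approximation of E[x^2 - 1 + xi] = x^2 - 1 (true roots at +/-1)."""
    x = float(x[0])
    return [np.mean(x**2 - 1.0 + xi)]

roots = []
for _ in range(20):                        # random restarts of the root finder
    x0 = rng.uniform(-3.0, 3.0)
    sol, info, ier, _ = fsolve(g_bar, [x0], full_output=True)
    if ier == 1 and not any(abs(sol[0] - r) < 1e-4 for r in roots):
        roots.append(float(sol[0]))

est_set = np.array(roots).reshape(-1, 1)
true_set = np.array([[-1.0], [1.0]])
hausdorff = max(directed_hausdorff(est_set, true_set)[0],
                directed_hausdorff(true_set, est_set)[0])
print("estimated roots:", np.round(sorted(roots), 4), " Hausdorff distance:", round(hausdorff, 4))
```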
We consider a stochastic variational inequality (SVI) problem with a continuous and monotone mapping over a closed and convex set. In strongly monotone regimes, we present a variable sample-size averaging scheme (VS-Ave) that achieves a linear rate with an optimal oracle complexity. In addition, the iteration complexity is shown to display a muted dependence on the condition number compared with standard variance-reduced projection schemes. To contend with merely monotone maps, we develop amongst the first proximal-point algorithms with variable sample-sizes (PPAWSS), where increasingly accurate solutions of strongly monotone SVIs are obtained via (VS-Ave) at every step. This allows for achieving a sublinear convergence rate that matches that obtained for deterministic monotone VIs. Preliminary numerical evidence suggests that the schemes compare well with competing schemes.
Adaptive Sampling Trust-Region Optimization (ASTRO) is a class of derivative-based stochastic trust-region algorithms developed to solve stochastic unconstrained optimization problems where the objective function and its gradient are observable only through a noisy oracle or using a large dataset. ASTRO incorporates adaptively sampled function and gradient estimates within a trust-region framework to generate iterates that are guaranteed to converge almost surely to a first-order or a second-order critical point of the objective function. Efficiency in ASTRO stems from two key aspects: (i) adaptive sampling to ensure that the objective function and its gradient are sampled only to the extent needed, so that small sample sizes result when iterates are far from a critical point and large sample sizes result when iterates are near a critical point; and (ii) quasi-Newton Hessian updates using BFGS. We describe ASTRO in detail, give a sense of its theoretical guarantees, and report extensive numerical results.
This paper presents a framework that was developed to achieve data-driven robust optimization of processes. The main idea is to automatically build a single and global predictive model using a machine learning technique (random forests), and then to use a derivative-free black-box optimization technique (MADS) to maximize a performance criterion. Monte-Carlo simulation is used to estimate pointwise prediction variance. This automated framework is designed to find optimal variance-stabilizing solutions while preserving the noisiness, non-smoothness, non-linearity, non-convexity and disjoint nature of real-life data. The time it takes to prepare the dataset depends on the volume of data, but this step occurs only once in the procedure. The time it takes to iterate during optimization depends on three user-specified quantities; therefore, the iterative portion is insensitive to data volume. Implementation was done using the R programming language which offers a wide variety of data processing, modelling and optimization capabilities.
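The framework itself is implemented in R with random forests and MADS; the following Python sketch only mirrors its shape under stated substitutions: scikit-learn's random forest as the global predictive model, the per-tree spread as a crude stand-in for the Monte-Carlo pointwise variance estimate, and SciPy's differential evolution standing in for the derivative-free MADS search. The data, bounds, and variance penalty weight are invented for illustration.
```python
# Loose sketch of the pipeline shape: fit a random-forest surrogate to noisy process data,
# estimate pointwise prediction variance from the per-tree spread, and search the surrogate
# with a derivative-free optimizer for a variance-stabilized operating point.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.optimize import differential_evolution

rng = np.random.default_rng(3)

# Synthetic historical process data: two inputs, noisy response to be maximized.
X = rng.uniform(-2, 2, size=(400, 2))
y = -(X[:, 0] ** 2 + 0.5 * X[:, 1] ** 2) + rng.normal(0, 0.3, size=400)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def neg_criterion(x):
    """Variance-penalized predicted performance (to be minimized)."""
    per_tree = np.array([t.predict(x.reshape(1, -1))[0] for t in forest.estimators_])
    mean, var = per_tree.mean(), per_tree.var()
    return -(mean - 1.0 * var)          # penalize uncertain regions (weight is illustrative)

result = differential_evolution(neg_criterion, bounds=[(-2, 2), (-2, 2)], seed=0, maxiter=30)
print("suggested operating point:", np.round(result.x, 3), " criterion:", round(-result.fun, 3))
```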
Optimal Computing Budget Allocation for Binary Classification with Noisy Labels and Its Applications on Simulation Analytics
Weizhi Liu and Haobin Li (National University of Singapore), Hui Xiao (Southwestern University of Finance and Economics), and Loo Hay Lee and Ek Peng Chew (National University of Singapore)
In this study, we consider the budget allocation problem for binary classification with noisy labels. The classification accuracy can be improved by reducing the label noise, which can be achieved by observing multiple independent observations of the labels. Hence, an efficient budget allocation strategy is needed to reduce the label noise while guaranteeing a promising classification accuracy. Two problem settings are investigated in this work. One assumes that we do not know the underlying classification structure, and the label of each data point can only be determined by comparing the sample average of its Bernoulli success probability with a given threshold. The other assumes that data points with different labels can be separated by a hyperplane. For both cases, closed-form optimal budget allocation strategies are developed. A simulation analytics example is used to demonstrate how the budget is allocated to different scenarios to further improve the learning of optimal decision functions.
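A naive sequential sketch of the first problem setting (our heuristic for illustration, not the paper's closed-form optimal allocation): each point's label is the side of the threshold its Bernoulli sample mean falls on, and the remaining budget is spent one observation at a time on the point whose labeling is currently least certain.
```python
# Hedged sketch: labels decided by comparing Bernoulli sample means with a threshold,
# with the sampling budget spent sequentially on the most ambiguous points.
# (A simple heuristic allocation, not the paper's OCBA rule.)
import numpy as np

rng = np.random.default_rng(11)
true_p = np.array([0.1, 0.35, 0.45, 0.55, 0.9])   # unknown Bernoulli success probabilities
threshold = 0.5
budget = 2000

counts = np.full(len(true_p), 5)                  # warm-up: 5 observations per point
successes = np.array([rng.binomial(5, p) for p in true_p])

for _ in range(budget - counts.sum()):
    p_hat = successes / counts
    se = np.sqrt(np.maximum(p_hat * (1 - p_hat), 1e-6) / counts)
    k = int(np.argmin(np.abs(p_hat - threshold) / se))   # least certain labeling decision
    successes[k] += rng.binomial(1, true_p[k])
    counts[k] += 1

labels = (successes / counts >= threshold).astype(int)
print("allocation:", counts, " labels:", labels, " true labels:", (true_p >= threshold).astype(int))
```
Most of the budget ends up on the points whose success probabilities sit near the threshold, which is the behavior the optimal allocation is designed to formalize.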
We consider a simulation optimization problem whose objective function is defined as the expectation of a simulation output based on a continuous decision variable, where the parameters of the simulation input distributions are estimated based on independent and identically distributed streaming data from a real-world system. Finite-sample error in the input parameter estimates causes input uncertainty in the simulation output, which decreases as the data size increases. By viewing the problem through the lens of misspecified stochastic optimization, we develop a stochastic approximation (SA) framework to solve a sequence of problems defined by the sequence of input parameter estimates to increasing levels of exactness. Under suitable assumptions, we observe that the error in the SA solution diminishes to zero in expectation and propose a SA sampling scheme so that the resulting solution iterates converge to the optimal solution under the real-world input distribution at the best possible rate.
Weakly coupled Markov decision processes (MDPs) are stochastic dynamic programs where decisions in independent sub-MDPs are linked via constraints. Their exact solution is computationally intractable. Numerical experiments have shown that Lagrangian relaxation can be an effective approximation technique. This paper considers two classes of weakly coupled MDPs with imperfect information. In the first case, the transition probabilities for each sub-MDP are characterized by parameters whose values are unknown. This yields a Bayes-adaptive weakly coupled MDP. In the second case, the decision-maker cannot observe the actual state and instead receives a noisy signal. This yields a weakly coupled partially observable MDP. Computationally tractable approximate dynamic programming methods combining semi-stochastic certainty equivalent control or Thompson sampling with Lagrangian relaxation are proposed. These methods are applied to a class of stochastic dynamic resource allocation problems and to restless multi-armed bandit problems with partially observable states. Insights are drawn from numerical experiments.
We consider optimization with uncertain or probabilistic constraints under the availability of limited data or Monte Carlo samples. In this situation, the obtained solutions are subject to statistical noises that affect both the feasibility and the objective performance. To guarantee feasibility, common approaches in data-driven optimization impose constraint reformulations that are "safe" enough to ensure solution feasibility with high confidence. Oftentimes, selecting this safety margin relies on loose statistical estimates, in turn leading to overly conservative and suboptimal solutions. We propose a validation-based framework to balance the feasibility-optimality tradeoff more efficiently, by leveraging the typical low-dimensional structure of solution paths in these data-driven reformulations instead of estimates based on the whole decision space utilized by past approaches. We demonstrate how our approach can lead to a feasible solution with less conservative safety adjustment and confidence guarantees.
Simulation-optimization problems exhibit substantial inefficiencies when applied to high-dimensional problems. The problem is exacerbated in the case where feasibility also needs to be evaluated using simulation. In this work, we propose an approximate iterative approach to identify feasible solutions and quickly find good solutions to the original problem. The approach is based on discrete event optimization (i.e., a mathematical programming representation of the simulation-optimization problems) and Benders decomposition, which is used for cut generation while a system alternative is simulated. The procedure is currently tailored for the server allocation problem in the multi-stage serial-parallel manufacturing line constrained to a target system time on a specific sample path. Results on randomly generated instances show its effectiveness in quickly eliminating infeasible solutions, thus decreasing the required computational effort and keeping the optimality gap low.
We study the problem of coordination control of multiple traffic signals to mitigate traffic congestion. The parameters we optimize are the coordination pattern and offsets. A coordination pattern indicates which traffic signals are coordinated. Offsets show how traffic signals can be coordinated. We aim at finding the optimal combination of the coordination pattern and offsets. In this paper, we treat it as an optimization problem whose search space is a conditional space with hierarchical relationships; the coordination pattern determines the controllability of the offsets. Then, we tackle the problem by proposing a novel method built upon Bayesian Optimization, called BACH. Experiments demonstrate that BACH successfully optimizes coordination control of traffic signals and BACH outperforms various state-of-the-art approaches in the literature of traffic signal control and Bayesian Optimization in terms of best parameters found by these methods with a fixed budget.
We study the staffing and shift scheduling problem in multi-skill multi-channel contact centers, containing calls, emails and chats. Due to the fact that each channel has its own operating characteristics, the existing solutions developed for multi-skill call centers are not applicable to our problem. In this paper, we first build a high fidelity simulation model at a weekly level to evaluate various Quality of Service (QoS) measurements for a given schedule. Then we propose a simulation-based optimization algorithm to solve the staffing and shift scheduling problem integrally to minimize the total costs of agents under certain QoS requirements. In the numerical experiments, we show the effectiveness of the proposed approach with realistic instances.
Traditional optimal power flow describes the system performance only in a single snapshot while the resulting decisions are applied to an entire time period. Therefore, how well the selected snapshot can represent the entire time period is crucial for the effectiveness of optimal power flow. In this paper, period optimal power flow is proposed with a given time period partitioned into consecutive linear time intervals, in which bus voltages and power injections are linear mapping of time. The optimal and operational limits in each time interval are precisely represented by its median and two end snapshots, through which the system performance of the optimal power flow model is significantly improved. Case studies based on simulation of a modified IEEE 118-bus system have demonstrated the simplicity and effectiveness of the proposed model.
Given finite real-world data, input models are estimated with error. Thus, the system performance estimation uncertainty includes both input and simulation uncertainties. Built on the global sensitivity analysis proposed by Oakley and O'Hagan, we develop a metamodel-assisted Bayesian framework to quantify the contributions from simulation and input uncertainties. It further estimates the impact from each source of input uncertainty and predicts the value of collecting additional input data, which could guide the data collection to efficiently improve the system response estimation accuracy. The empirical study demonstrates that our approach has promising performance.
Bootstrapping is a popular tool for quantifying input uncertainty, inflated uncertainty in the simulation output caused by finite-sample estimation error in the input models. Typical bootstrap-based procedures have a nested simulation structure that requires BR simulation runs: the outer loop bootstraps B input distributions, each of which requires R inner simulation runs. In this article, we present a measure-theoretic framework for constructing a sample path likelihood ratio and propose an efficient input uncertainty quantification procedure using two green simulation estimators. The proposed procedures reuse the same R inner simulation outputs in all outer loops by reweighting them using appropriately defined likelihood ratios. Both procedures produce asymptotically valid confidence intervals for the expected simulation output under the true input distribution. Our numerical results show that the proposed procedures have efficiency gains compared to other popular bootstrap-based alternatives.
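A minimal sketch of the reuse idea, under our own simplifying assumptions (a single exponential input model and a toy output); it is not the paper's estimator code. The R inner runs are generated once under the fitted input distribution, and each bootstrap input distribution reuses them by reweighting every run with its sample-path likelihood ratio rather than re-simulating.
```python
# Hedged sketch of likelihood-ratio reuse: simulate R runs once under the fitted input
# distribution (exponential service times here), then re-estimate the expected output
# under each bootstrap input distribution by reweighting the same runs.
import numpy as np

rng = np.random.default_rng(2019)
data = rng.exponential(1 / 2.0, size=100)     # "real-world" service-time data, true rate 2.0
rate_hat = 1.0 / data.mean()                  # fitted input model

R, n = 1000, 20                               # R inner runs, n service times per run
services = rng.exponential(1.0 / rate_hat, size=(R, n))
outputs = services.sum(axis=1)                # toy simulation output: total service time per run

def reweighted_mean(bootstrap_rate):
    """Expected output under a bootstrap input distribution, reusing the same R runs."""
    # Sample-path likelihood ratio of run r for exponential densities:
    # prod_i f_boot(x_ri) / f_hat(x_ri) = (boot/hat)^n * exp(-(boot-hat) * sum_i x_ri).
    log_lr = (n * np.log(bootstrap_rate / rate_hat)
              - (bootstrap_rate - rate_hat) * services.sum(axis=1))
    return float(np.mean(np.exp(log_lr) * outputs))

for _ in range(3):                            # a few bootstrap resamples of the input data
    boot_rate = 1.0 / rng.choice(data, size=data.size, replace=True).mean()
    print(f"bootstrap rate {boot_rate:.3f}: reweighted mean {reweighted_mean(boot_rate):.2f}, "
          f"analytic mean {n / boot_rate:.2f}")
```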
Simple question: How sensitive is your simulation output to the variance of your simulation input models? Unfortunately, the answer is not simple because the variance of many standard parametric input distributions can achieve the same change in multiple ways as a function of the parameters. In this paper we propose a family of output-mean-with-respect-to-input-variance sensitivity measures and identify two particularly useful members of it. A further benefit of this family is that there is a straightforward estimator of any member with no additional simulation effort beyond the nominal experiment. A numerical example is provided to illustrate the method and interpretation of results.
In stochastic simulation, input uncertainty (IU) is caused by the error in estimating input distributions using finite real-world data. When it comes to simulation-based Ranking and Selection (R&S), ignoring IU can lead to the failure of many existing procedures. In this paper, we study a new version of the fixed confidence R&S problem, where sequential input data can be acquired to reduce IU over time. To solve the problem, we first propose a moving average estimator for online estimation with sequential data. Then, a new procedure is designed by extending a Sequential Elimination framework. As is shown numerically, our procedure can effectively achieve the desired probability of correct selection, but there is plenty of room for improving its efficiency.
Distributionally Robust Optimization (DRO) is a flexible framework for decision making under uncertainty and statistical estimation. For example, recent works in DRO have shown that popular statistical estimators can be interpreted as the solutions of suitably formulated data-driven DRO problems. In turn, this connection is used to optimally select tuning parameters in terms of a principled approach informed by robustness considerations. This paper contributes to this growing literature connecting DRO and statistics by showing how boosting algorithms can be studied via DRO. We propose a boosting type algorithm, named DRO-Boosting, as a procedure to solve our DRO formulation. Our DRO-Boosting algorithm recovers Adaptive Boosting (AdaBoost) in particular, thus showing that AdaBoost is effectively solving a DRO problem. We apply our algorithm to a financial dataset on credit card default payment prediction. We find that our approach compares favorably to alternative boosting methods which are widely used in practice.
Data-driven Optimal Transport Cost Selection for Distributionally Robust Optimization
Best Contributed Theoretical Paper - Finalist
Jose Blanchet (Stanford University), Yang Kang (Columbia University), Karthyek Murthy (Singapore University of Technology & Design), and Fan Zhang (Stanford University)
Some recent works showed that several machine learning algorithms, such as square-root Lasso, Support Vector Machines, and regularized logistic regression, among many others, can be represented exactly as distributionally robust optimization (DRO) problems. The distributional uncertainty set is defined as a neighborhood centered at the empirical distribution, and the neighborhood is measured by optimal transport distance. In this paper, we propose a methodology which learns such neighborhood in a natural data-driven way. We show rigorously that our framework encompasses adaptive regularization as a particular case. Moreover, we demonstrate empirically that our proposed methodology is able to improve upon a wide range of popular machine learning estimators.
This research studies the problem of node ranking in a random network. Specifically, we consider a Markov chain with several ergodic classes and unknown transition probabilities which can be estimated by sampling. The objective is to select all of the best nodes in each ergodic class. A sampling procedure is proposed to decompose the Markov chain and maximize a weighted probability of correct selection of the best nodes in each ergodic class. Numerical results demonstrate the efficiency of the proposed sampling procedure.
This paper is concerned with building statistical models for non-stationary input processes with a linear trend. Under a Poisson assumption, we investigate the use of the maximum likelihood (ML) method to estimate the model and establish limiting behavior for the ML estimator in an asymptotic regime that naturally arises in applications with high-volume inputs. We also develop likelihood ratio tests for the presence of a linear trend and discuss the asymptotic efficiency. Change-point detection procedures are discussed to identify an unknown point when the model switches from a stationary mode to non-stationarity with a linear trend. Numerical experiments on an e-commerce data set are included. Incorporating a linear trend into an input model can improve prediction accuracy and potentially enhance associated performance evaluations and decision making.
Recent developments are summarized concerning Sequest and Sequem, sequential procedures for estimating nonextreme and extreme steady-state quantiles of a simulation output process. The procedures deliver point and confidence-interval (CI) estimators of a given quantile, where each CI approximately satisfies given requirements on its coverage probability and its absolute or relative precision. The public-domain Sequest software now includes both procedures. The software is applied to a user-supplied dataset exhibiting warm-up effects, autocorrelation, and a multimodal marginal distribution. For the simulation analysis method of standardized time series (STS), we also sketch an elementary proof of a functional central limit theorem (FCLT) that is needed to develop STS-based quantile-estimation procedures when the output process satisfies a conventional density-regularity condition and either (i) a geometric-moment contraction condition and an FCLT for a related binary process, or (ii) conventional strong-mixing conditions.
Interactive on-the-fly simulators offer key advantages over traditional simulation tools in all aspects of a simulation project life cycle. Unlike traditional simulators that require constant starting and stopping of the simulation engine to validate and develop models, interactive on-the-fly simulators allow for model development while the simulator is running. This new breed of simulation tools impact model development, model validation, environment optimization, and provide a path to real-time self-modifying simulation models. The interactive component of the new simulators allows for additional uses of the completed model such as employee training, risk mitigation, and real-time optimization. Interactive on-the-fly simulators are being used to improve scheduling, optimize the flow, and provide simulation benefits to less experienced personnel. This new breed of simulation environment can learn the model behavior and dynamically optimize the process flow. They provide all the integration points required for IOT and Industry 4.0. www.createasoft.com.
Discrete-event simulation is important in its own right and is inevitably intertwined with many other forms of analytics. Source data often must be processed before being used in a simulation model. Collection of simulated data needs to coordinate with and support the evaluation of performance metrics. Or you might need to integrate other analytics into a simulation model to capture specific complexities in a modeled system. SAS® Simulation Studio provides an interactive, graphical environment for building, running, and analyzing discrete-event simulation models, and is deeply integrated with other SAS® analytics. We illustrate how SAS Simulation Studio helps you work with simulation models and address related challenges. You have full control over the use of input data and the creation of simulated data. With strong experimental design capabilities, you can simulate for all needed scenarios. Additionally, you can embed any SAS analytic program directly into the execution of your simulation model.
In the last few years, most software has moved “to the cloud” – yet most Excel add-ins for risk analysis and Monte Carlo simulation, as well as conventional and stochastic optimization, are limited to desktop Excel for Windows. An exception is Analytic Solver® Simulation, which has been fully re-engineered as a JavaScript-based “Office add-in” that works in Excel for the Web, Excel for Windows and Excel for Macintosh. Further, Analytic Solver Simulation enables Excel-based risk analysis models to be “translated” into live models that run in cloud-based business intelligence tools like Tableau and Power BI, or into a form usable in a custom web or mobile application, executing simulations and optimizations in the cloud via Frontline’s RASON® Analytics API. This Vendor Tutorial session at WSC 2019 will demonstrate the range of capabilities available to users of Analytic Solver and RASON software.
Uncertainties are typically expressed in terms of probability distributions represented as cumulative or density functions. In general, it is impossible to perform arithmetic with such representations. Suppose soybean yield is normally distributed with a mean and standard deviation of 200 and 30 bushels/acre, respectively. And suppose the price per bushel is lognormal with a median and 90th percentile of $9 and $12 respectively. What is the resulting distribution of revenue? The Open SIPmath Standard(TM) represents such uncertainties as vectors of simulated or historical realizations and metadata called Stochastic Information Packets (SIPs). SIPs have similar group properties to numbers, so the SIP of revenue is merely the SIP of yield multiplied element by element times the SIP of price. SIPs are platform agnostic, and are at home in R and Python. But the Data Table function in Excel can also perform calculations on SIPs using the same keystrokes used for numbers.
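Because the abstract fully specifies the example, here is a short sketch of the SIP arithmetic in Python with NumPy (our illustration; the SIPmath standard's own tooling and metadata handling are not shown). The lognormal parameters are backed out from the stated median and 90th percentile.
```python
# Hedged sketch of SIP arithmetic: represent each uncertainty as a vector of simulated
# realizations and multiply element by element to obtain the revenue distribution.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2024)
trials = 100_000

# Yield SIP: Normal(mean=200, sd=30) bushels/acre.
yield_sip = rng.normal(200.0, 30.0, trials)

# Price SIP: lognormal with median $9 and 90th percentile $12 per bushel.
mu = np.log(9.0)                                    # median of a lognormal is exp(mu)
sigma = (np.log(12.0) - np.log(9.0)) / norm.ppf(0.90)
price_sip = rng.lognormal(mu, sigma, trials)

# Revenue SIP: element-by-element product, using the same keystrokes as for numbers.
revenue_sip = yield_sip * price_sip

print(f"median revenue ≈ ${np.median(revenue_sip):,.0f}/acre")
print(f"5th-95th pct   ≈ ${np.percentile(revenue_sip, 5):,.0f} - ${np.percentile(revenue_sip, 95):,.0f}")
```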
The Optimization Firm will present its new TOF-SNOBFIT software, a high-performance Fortran implementation of the SNOBFIT algorithm by Huyer and Neumaier (2008). The software makes it possible to solve box-constrained global optimization problems using only function values from simulations or experiments. This talk will describe the main algorithmic features of TOF-SNOBFIT, illustrate its use, and present extensive computational results with hundreds of benchmarks and various applications. The computational experiments were designed to measure and analyze CPU time requirements, number of calls to the simulator, and the effect of starting point and number of iterations on solution quality.
MOZART® has been used in major memory semiconductor manufacturers and display panel makers for more than 15 years. The simulation-based planning and scheduling system has weekly planning (MP: master plan), daily planning (FP: factory plan), real-time scheduling (APS: advanced planning and scheduling), lot pegging (RTF: return to forecast), and what-if simulation (LSE: loading simulation engine) modules. VMS recently launched a new module named MOZART WISE (What-If Simulation Environment), which is the enhanced version of LSE. WISE consists of experiment designer, executer, and analyzer. We will introduce how to plan scenarios with the designer, and how to analyze the results with the analyzer. We will also present the weight factor optimization result with machine learning as an example.
The AutoSched product from Applied Materials has been developed to accurately simulate the complexities production managers must handle to meet customer commitments and improve the utilization of critical resources. Recent gains in the integration between AutoSched and other Applied Materials products such as APF RTD provide opportunities for even greater returns for facility productivity initiatives. These advances have made AutoSched one of the most widely used simulation packages for modeling complex high-tech manufacturing operations in the world today.
Simulation has traditionally been applied in system design projects where the basic objective is to evaluate alternatives and predict and improve the long term system performance. In this role, simulation has become a standard business tool with many documented success stories. Beyond these traditional system design applications, simulation can also play a powerful role in scheduling by predicting and improving the short term performance of a system.
In the manufacturing context, the major new trend is towards digitally connected factories that introduce a number of unique requirements which traditional simulation tools do not address. Simio has been designed from the ground up with a focus on both traditional applications as well as advanced scheduling, with the basic idea that a single Simio model can serve both purposes. In this paper we will focus on the application of Simio simulation in the Industry 4.0 environment.
This paper describes the Simio™ modeling system that is designed to simplify model building by promoting a modeling paradigm shift from the process orientation to an object orientation. Simio is a simulation modeling framework based on intelligent objects. The intelligent objects are built by modelers and then may be reused in multiple modeling projects. Although the Simio framework is focused on object-based modeling, it also supports a seamless use of multiple modeling paradigms including event, process, object, systems dynamics, agent-based modeling, and Risk-based Planning and Scheduling (RPS).
Simulink, a model-based design environment, is often used in the design, simulation, testing, and building of physical or software systems. These systems often involve a number of different, interconnected components with a high amount of complexity. To simulate these models, a number of different types of solvers can be employed, which the product characterizes as fixed-step, variable-step, continuous, discrete and discrete-event. To simulate a model with multiple solvers, co-simulation can be employed. Co-simulation and parallel simulation capabilities in Simulink, along with the visualization and analysis capabilities of MATLAB, allow one to generate a more complete simulation and better understand the results. Simulink also allows for multiple 3rd-party tool connections and hand-integration, along with co-simulation. The product can also leverage parallel techniques to scale-up and improve performance of experiments with multiple simulations. MATLAB and Simulink have developed technology and strategies for integrating simulations for these situations, which will be discussed.
FlexSim is excited to unveil our brand-new healthcare simulation module at the Winter Simulation Conference! FlexSim Healthcare (HC) was released in 2009 and has helped numerous healthcare systems and facilities to model and analyze patient flow activities for the past decade, particularly in the areas of patient waiting time and staff/equipment utilization. Now, on the 10th anniversary of its release, we’ll be showcasing its successor—a powerful, modern module that was designed to be even more capable, easy-to-use, and visually impressive. This version allows for even more granular control in modeling the specifics of complex healthcare systems without resorting to code, with myriad new features ranging from enhanced database connectivity to virtual reality, improved data and reporting to automated travel networks and much more.
Many people still believe that simulation is largely an exercise in computer programming when, in fact, programming should only represent 25 to 50 percent of the work in a sound simulation project. In this talk we discuss and give solutions for what are arguably the three major methodological issues that need to be addressed in any successful simulation study, namely, validation of the model, selecting input probability distributions, and the design and analysis of simulation experiments. For model validation we give the most important techniques to help ensure validity. In the case of input modeling we demonstrate software that can select an appropriate distribution both when system data are available and otherwise. Finally, for simulation output-data analysis we discuss an effective statistical technique that can utilize multi-core processors and cloud computing.
One of the major trends in simulation modeling these days is moving the execution of simulation models to the cloud. We will showcase AnyLogic Cloud – the most advanced cloud solution existing today for simulation – and go through its typical use cases, including: integration of simulation into custom analytical workflows, scalable high performance computing, instant delivery of models to clients and users, creation of custom web interfaces for simulation models, and more. We will also quickly go through the AnyLogic Cloud’s open API and introduce AnyLogic Private Cloud – the fully functional product for organizations with strict security guidelines.
In this presentation, we will discuss how you can leverage unique features of AnyLogic simulation software to solve your business challenges. We demonstrate applications of simulation in various domains and how AnyLogic’s industry-specific libraries can help you to easily model complex, real-world systems. Our main focus will be on the powerful capabilities of the Material Handling Library (MHL) that is used to design detailed models of production and storage facilities, in addition to managing material workflows inside four walls. We showcase MHL features such as multi-tier conveyor/transporter networks, path guided and free-space moving transporters (e.g., AGVs), density map for free space transporters, jib and overhead cranes, and lifts.