WSC 2006 Abstracts
Modeling Methodology B Track
Monday 10:30 AM – 12:00 PM
Parallel & Distributed Simulation I
Chair: Stephen Turner (Nanyang Technological University,
Singapore)
Compiled Code in Distributed Logic
Simulation
Jun Wang and Carl Tropper (McGill University)
Abstract:
A logic simulation approach known as compiled-code
event-driven simulation was developed in the past for sequential logic
simulation. It improves simulation performance by reducing the logic
evaluation and propagation time. In this paper we describe the application of
this approach to distributed logic simulation. Our experimental results show
that using compiled code can greatly improve the stability and overall
performance of a Time Warp-based logic simulator. We also present a technique
called fanout aggregation that makes use of information on circuit partitions
and considerably improves the run-time performance of our (distributed)
compiled-code simulator. It does not produce a similar improvement when used
in conjunction with an interpreted simulator because of run-time overhead.
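To make the contrast between interpreted and compiled-code gate evaluation concrete, here is a minimal, purely illustrative sketch (the netlist, operator table, and code-generation step are invented for this example and are not taken from the paper): an interpreted simulator consults a netlist data structure on every event, whereas a compiled-code simulator generates a specialized evaluation routine per gate ahead of time.

    # Illustrative sketch only: contrasts interpreted and "compiled" gate
    # evaluation for event-driven logic simulation. Names and structure are
    # hypothetical, not the authors' simulator.
    import operator

    NETLIST = {                      # gate -> (operation name, input nets)
        "g1": ("and", ["a", "b"]),
        "g2": ("or",  ["g1", "c"]),
    }
    OPS = {"and": operator.and_, "or": operator.or_}

    def eval_interpreted(gate, values):
        """Interpreted: look up the gate's description on every evaluation."""
        op, inputs = NETLIST[gate]
        return OPS[op](*(values[i] for i in inputs))

    def compile_gate(gate):
        """'Compiled code': generate a specialized function once, up front."""
        op, inputs = NETLIST[gate]
        src = f"lambda v: v['{inputs[0]}'] {'&' if op == 'and' else '|'} v['{inputs[1]}']"
        return eval(src)             # stands in for real code generation

    values = {"a": 1, "b": 1, "c": 0}
    compiled = {g: compile_gate(g) for g in NETLIST}
    values["g1"] = compiled["g1"](values)
    assert values["g1"] == eval_interpreted("g1", values) == 1

The two paths must agree on every input; the saving comes from eliminating the per-event table lookups and dispatch.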
Causality Information and Fossil Collection in
Time Warp Simulations
Malolan Chetlur (AT&T) and Philip A
Wilsey (University of Cincinnati)
Abstract:
This paper presents a Time Warp fossil collection mechanism that functions
without the need for a GVT estimation algorithm. Effectively, each Logical
Process (LP) collects causality information during normal event execution and
then utilizes this information to identify fossils. In this mechanism, LPs use
constant-size vectors (independent of the total number of parallel simulation
objects) as timestamps, called Plausible Total Clocks, to disseminate causality
information. For proper operation, this mechanism requires that the
communication layer preserve a FIFO ordering on messages. A detailed
description of this new fossil collection mechanism and its proof of
correctness are presented in this paper.
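The abstract's key ingredient is a timestamp vector whose size does not grow with the number of LPs. A minimal sketch of that general idea (a generic "plausible clock" with hashed slots, merged by element-wise max) is shown below; the paper's Plausible Total Clocks and its fossil-identification rule are more involved and are not reproduced here.

    # Minimal sketch of a constant-size "plausible clock": a K-entry vector
    # whose size is independent of the number of LPs. Each LP maps (hashes)
    # to one slot; receives merge by element-wise max. This only illustrates
    # the timestamp structure named in the abstract, not the paper's actual
    # Plausible Total Clocks or its fossil-identification rule.
    K = 4                                    # fixed vector size, chosen a priori

    def tick(clock, lp_id):
        """Advance the slot owned by lp_id on a local event."""
        c = list(clock)
        c[hash(lp_id) % K] += 1
        return tuple(c)

    def merge(local, received, lp_id):
        """On message receipt: element-wise max, then tick the local slot."""
        return tick(tuple(max(a, b) for a, b in zip(local, received)), lp_id)

    def dominates(a, b):
        """True if timestamp a is plausibly not earlier than b in every slot."""
        return all(x >= y for x, y in zip(a, b))

    lp1 = tick((0,) * K, "lp1")              # lp1 executes an event
    lp2 = merge((0,) * K, lp1, "lp2")        # lp2 receives lp1's message
    assert dominates(lp2, lp1)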
Eliminating Remote Message Passing in Optimistic
Simulation
David Bauer (The MITRE Corporation) and Christopher D
Carothers (Rensselaer Polytechnic Institute)
Abstract:
This paper introduces an algorithm for parallel
simulation capable of executing the critical path without a priori knowledge
of the model being executed. This algorithm is founded on the observation that
each initial event in a model causes a stream of events to be generated for
execution. By focusing on the parallelization of event streams, rather than
logical processes, we have created a new simulation engine optimized for large
scale models (i.e., models with 1 million LPs or more).
Monday 1:30 PM – 3:00 PM
Parallel & Distributed Simulation II
Chair: Carl Tropper (McGill University)
Incremental Checkpointing with Application to
Distributed Discrete Event Simulation
Thomas Huining Feng and
Edward A. Lee (Center for Hybrid and Embedded Software Systems (CHESS))
Abstract:
Checkpointing is widely used in robust fault-tolerant
applications. We present an efficient incremental checkpointing mechanism that
records only state changes rather than the complete state. After a checkpoint
is created, state changes are logged incrementally as records in memory, from
which an application can later roll back. This incrementalism allows
checkpointing to be implemented with high performance: checkpoint creation and
state recording each take only a small constant time, while rollback takes time
linear in the number of recorded state changes, which is bounded by the number
of state variables times the number of
checkpoints. We implement a Java source transformer that automatically
converts an existing application into a behavior-preserving one with
checkpointing functionality. This transformation is application-independent
and application-transparent. A wide range of applications can benefit from
this technique. Currently, it has been used for distributed discrete event
simulation using the Time Warp technique.
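A minimal sketch of the incremental idea, with invented names and in Python rather than the paper's Java transformer: checkpoint creation opens an empty change record in constant time, each assignment logs only the old value, and rollback undoes the log in time linear in the number of recorded changes.

    # Hedged sketch of incremental checkpointing: instead of copying the whole
    # state at a checkpoint, record each assignment (variable, old value) and
    # undo the log on rollback. Class and method names are illustrative, not
    # the paper's Java transformer.
    class Checkpointable:
        def __init__(self, **state):
            self.state = dict(state)
            self.log = []                    # list of checkpoints; each is a list of (key, old)

        def checkpoint(self):
            self.log.append([])              # O(1): open an empty change record

        def set(self, key, value):
            if self.log:                     # O(1): remember only the old value
                self.log[-1].append((key, self.state[key]))
            self.state[key] = value

        def rollback(self):
            """Undo changes since the last checkpoint; linear in changes recorded."""
            for key, old in reversed(self.log.pop()):
                self.state[key] = old

    sim = Checkpointable(lvt=0, queue_len=0)
    sim.checkpoint()
    sim.set("lvt", 42); sim.set("queue_len", 3)
    sim.rollback()
    assert sim.state == {"lvt": 0, "queue_len": 0}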
Performance Evaluation of a CMB
Protocol
Célia Leiko Ogawa Kawabata (Centro Universitário Central
Paulista), Regina Helena Carlucci Santana, Marcos José Santana, and Sarita
Mazzini Bruschi (ICMC/USP) and Kalinka Regina Lucas Jaquie Castelo Branco
(Centro Universitário Eurípedes de Marília)
Abstract:
This paper presents a performance evaluation of a CMB (Chandy-Misra-Bryant)
protocol from the perspective of execution time. The performance of each
logical process in the simulation is measured. Our evaluation shows that
logical processes can exhibit different behaviors and that different protocols
can be used simultaneously within a simulation. While some logical processes
perform well under conservative protocols, others benefit from optimistic
protocols because they would otherwise spend most of their time blocked
unnecessarily. To analyze the behavior of the simulations, several models were
simulated using a CMB implementation called ParSMPLX. These models showed that
each logical process has a distinct behavior that makes it better suited to a
specific protocol, and exploiting this improves performance.
Efficient Parallel Queuing System
Simulation
Tobias Kiesling (Universität der Bundeswehr München) and
Thomas Krieger (Institut für Technik Intelligenter Systeme, Universität der
Bundeswehr München)
Abstract:
Queuing systems are an important building block for
performance evaluation in various application areas, due to their powerful,
yet simple nature. Although it is often possible to perform an analytical
evaluation of a queuing model, simulation of queuing systems remains an
important technique in the context of performance evaluation. In order to
speed up queuing simulation executions, parallel and distributed simulation
techniques have been devised. Unfortunately, existing methods are complex in
nature, leading to increased development costs. Moreover, most of these
approaches have been developed for tightly coupled parallel processing
machines. Consequently, they are not suited for a distributed computing
environment. This paper investigates an alternative approach based on the
technique of time-parallel simulation with fix-up computations. The salient
features of this novel approach are its simplicity and its suitability for
execution in a distributed environment.
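The following sketch illustrates time-parallel simulation with fix-up computations on a toy single-server queue (one service per time slot); the interval decomposition and fix-up loop are generic textbook versions, not the authors' specific method. Simulated time is cut into intervals, each interval is simulated from a guessed initial state, and intervals whose guesses turn out wrong are re-simulated until the boundary states are consistent.

    # Hedged sketch of time-parallel simulation with fix-up computations for a
    # single-server queue (state = jobs in system, one service per time slot).
    # Interval initial states are guessed, the intervals are simulated
    # independently (conceptually in parallel), then fixed up until consistent.
    def simulate(arrivals, start):
        """Sequentially apply n <- max(n + a - 1, 0) and return the trajectory."""
        n, out = start, []
        for a in arrivals:
            n = max(n + a - 1, 0)
            out.append(n)
        return out

    def time_parallel(arrivals, chunks):
        size = len(arrivals) // chunks
        parts = [arrivals[i * size:(i + 1) * size] for i in range(chunks)]
        guesses = [0] * chunks                                       # guessed initial states
        results = [simulate(p, g) for p, g in zip(parts, guesses)]   # "parallel" pass
        while True:                                                  # fix-up passes
            fixed = True
            for i in range(1, chunks):
                true_start = results[i - 1][-1]
                if true_start != guesses[i]:
                    guesses[i] = true_start
                    results[i] = simulate(parts[i], true_start)
                    fixed = False
            if fixed:
                return [n for part in results for n in part]

    trace = [2, 0, 1, 3, 0, 0, 1, 1]
    assert time_parallel(trace, chunks=2) == simulate(trace, 0)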
Monday 3:30 PM – 5:00 PM
Distributed Simulation and the High Level
Architecture
Chair: Tobias Kiesling (International Computer Science
Institute)
A Distributed Simulation Approach for Modeling and
Analyzing Systems of Systems
Abeer Tarief Sharawi, Serge N.
Sala-Diakanda, Adam Dalton, Sergio Quijada, Nabeel Yousef, Jose Sepulveda, and
Luis Rabelo (University of Central Florida)
Abstract:
Certain business objectives cannot be met without interaction and communication
between different systems. The concept of a system of systems (SoS), which aims
to describe this interaction between systems, has been gaining attention in the
last few years. In this paper an extensive review of the literature is
performed to capture the main characteristics associated with this concept and
to propose a new, more complete definition. The paper also proposes the use of
distributed simulation, following the High Level Architecture (HLA) rules, to
model and simulate systems of systems. We illustrate the idea with two
examples: a simplified supply chain network for computer assembly and an
aircraft initial sizing scenario. The paper concludes with a discussion of some
significant advantages distributed simulation could offer over traditional
simulation for the analysis of such complex systems.
Development of a Runtime Infrastructure for
Large-Scale Distributed Simulations
Buquan Liu, Yiping Yao, Jing Tao, and Huaimin Wang (School of Computer, National University of Defense
Technology)
Abstract:
With the development of distributed modeling and simulation, it has become
necessary for the RTI to support large-scale applications. However, many
current RTIs cannot adequately support large-scale distributed simulations with
more than 100 federates. StarLink+ is an RTI developed according to the IEEE
1516 standard that can be used for large-scale simulations with thousands of
federates. StarLink+ introduces innovations in both its architecture and its
internal implementation. This paper presents its two-level architecture, which
combines the advantages of centralized and distributed architectures. To
further improve performance for large-scale simulations, two key techniques,
multithreading and data packing, are adopted in StarLink+. In addition, the
paper explains the efficient time-advancement mechanism in StarLink+'s time
management and discusses large-scale experiments with thousands of federates.
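Of the two techniques, data packing is the easier one to picture: many small RTI messages are buffered and shipped as one network packet. The sketch below is purely illustrative; the channel class and its methods are invented and are not StarLink+ or IEEE 1516 APIs.

    # Illustrative sketch of "data packing": buffer several small RTI messages
    # and send them as one network packet, flushing when the buffer fills or
    # the federate advances time. The API names here are invented.
    import json

    class PackingChannel:
        def __init__(self, transport_send, max_batch=8):
            self.buffer, self.max_batch = [], max_batch
            self.transport_send = transport_send        # e.g., a socket send

        def send(self, message):
            self.buffer.append(message)
            if len(self.buffer) >= self.max_batch:
                self.flush()

        def flush(self):
            if self.buffer:                              # one packet, many messages
                self.transport_send(json.dumps(self.buffer).encode())
                self.buffer = []

    packets = []
    chan = PackingChannel(packets.append, max_batch=3)
    for i in range(7):
        chan.send({"attr": "position", "value": i})
    chan.flush()                                         # e.g., at a time-advance grant
    assert len(packets) == 3                             # 7 updates -> 3 packets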
Implementation of Time Management in a Runtime
Infrastructure
Buquan Liu, Yiping Yao, Jing Tao, and Huaimin Wang
(School of Computer, National University of Defense Technology)
Abstract:
The High Level Architecture (HLA) time management is concerned with mechanisms
for guaranteeing message order, process synchronization, and execution
correctness in distributed simulations. Time management greatly influences the
scale of applications an RTI can support, especially through the computation of
the Greatest Available Logical Time (GALT) and the implementation of optimistic
services. StarLink is an RTI with a centralized architecture that is compliant
with the IEEE 1516 standard. This paper systematically describes the
implementation algorithms for the main time management services in StarLink.
Two efficient algorithms, for GALT computation and for optimistic services, are
also introduced; they are suitable for many RTIs, such as RTI1.3-NG, pRTI, and
DRTI. With the GALT algorithm, an RTI need not resolve recursion or deadlock.
For optimistic services, a simple mechanism that avoids rollback inside the RTI
is introduced, which greatly simplifies RTI development.
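For readers unfamiliar with GALT, the quantity itself can be sketched simply: a time-constrained federate cannot be granted a time beyond the minimum of (logical time + lookahead) over the regulating federates. The code below shows only that bound, with invented federate data; it ignores transient messages and says nothing about StarLink's recursion- and deadlock-free algorithm.

    # Hedged sketch of the standard idea behind GALT/LBTS: the greatest logical
    # time a constrained federate can be granted is bounded by the minimum of
    # (logical time + lookahead) over the regulating federates. Transient
    # messages are ignored, and the data is invented for illustration.
    def galt_for(target, federates):
        """federates: dict name -> (logical_time, lookahead, regulating?)."""
        bounds = [t + la for name, (t, la, reg) in federates.items()
                  if reg and name != target]
        return min(bounds) if bounds else float("inf")

    feds = {
        "radar":   (100.0, 5.0, True),
        "missile": ( 90.0, 1.0, True),
        "viewer":  (  0.0, 0.0, False),    # not time-regulating
    }
    assert galt_for("viewer", feds) == 91.0  # limited by missile: 90 + 1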
Tuesday 8:30 AM – 10:00 AM
Distributed Simulation in Industry
Chair: Steffen Strassburger (Fraunhofer Institute for Factory Operation and
Automation)
Distributed Simulation in Industry – A Survey, Part 1 – The COTS Vendors
Csaba Attila Boer (TBA BV), Arie de Bruin (Delft
University of Technology, Faculty of Electrical Engineering, Mathematics and
Computer Science) and Alexander Verbraeck (Delft University of Technology,
Faculty of Technology, Policy and Management)
Abstract:
Distributed simulation is used very little in industry,
especially when compared with the interest in distributed simulation from
research and from the military domain. In order to answer the question why
industry lags behind, the authors have carried out an extensive survey, using
a questionnaire and interviews, with users, vendors, and developers of
distributed simulation products, as well as with vendors of non-distributed
simulation software. Based on the results, the discrepancies between the
different “worlds” become clear enough to enable the formulation of guidelines
for the further development of standards for distributed simulation. This paper
reports on the first part of the survey, namely a questionnaire targeted at
vendors of commercial-off-the-shelf (COTS) simulation packages. Analysis of the
answers obtained establishes that industry is indeed relatively underdeveloped
in the area of distributed simulation, and it also sheds some light on the
reasons behind this.
Distributed Simulation in Industry – A Survey, Part 2 – Experts on Distributed Simulation
Csaba Attila Boer (TBA BV),
Arie de Bruin (Delft University of Technology, Faculty of Electrical
Engineering, Mathematics and Computer Science) and Alexander Verbraeck (Delft
University of Technology, Faculty of Technology, Policy and Management)
Abstract:
Distributed simulation is used very little in industry,
especially when compared with the interest in distributed simulation from
research and from the military domain. In order to answer the question why
industry lags behind, the authors have carried out an extensive survey, using
a questionnaire and interviews, with users, vendors, and developers of
distributed simulation products, as well as with vendors of non-distributed
simulation software. This paper reports on the second part of the survey,
namely a series of open-ended interviews. We report on the responses we
obtained, which indicate the discrepancies between the different “worlds”. A
categorization of these responses is given, from which it is possible to
formulate clear guidelines for the further development of standards for
distributed simulation.
Optimistic-conservative Synchronization in
Distributed Factory Simulation
Leon McGinnis and Sheng Xu (Georgia
Institute of Technology)
Abstract:
Distributed simulation is attractive for modeling complicated manufacturing
systems with many tools and products, such as a semiconductor wafer fabrication
line. However, conservative synchronization approaches can introduce excessive
execution overhead and yield little parallelism, which can eliminate the
speedup promised by distributed simulation. Our experience in building a
distributed simulation model of a 300mm wafer fab using the High Level
Architecture (HLA) shows that using model-specific information in a novel
adaptation of conservative synchronization can achieve a very significant
reduction in model execution time. This paper defines the time-chop problem for
which this adaptation is effective and formally develops our
optimistic-conservative synchronization scheme.
Tuesday 10:30 AM – 12:00 PM
Interoperability and Composability
Chair: Boon Gan (Singapore Institute of Manufacturing Technology)
Engineering ab Initio Dynamic Interoperability and
Composability Via Agent-mediated Introspective Simulation
Levent
Yilmaz (Auburn University) and Andreas Tolk (Virginia Modeling Analysis and
Simulation Center)
Abstract:
Complex, software-intensive simulation systems must respond to changing
technology, environments, and requirements. Hence, dynamic extensibility and
adaptability are significant concerns in an application domain. However,
existing interoperability and composability solutions are limited in dealing
with the dynamically evolving content needs of existing simulations and the
run-time inclusion of new components into a simulation system. Simulations that
are dynamically extensible while remaining interoperable require principled
designs that facilitate engineering extensibility, interoperability, and
composability in the first place. We propose and examine the utility of a
strategy based on an agent-mediated meta-simulation architecture.
Composing Simulations From XML-Specified Model
Components
Mathias Röhl and Adelinde M. Uhrmacher (University of
Rostock)
Abstract:
This paper is about the flexible composition of efficient simulation models. It
presents the realization of a component framework that can be added as an
additional layer on top of simulation systems. The framework builds upon
platform-independent specifications of components in XML to evaluate dependency
relationships and parameters during composition. The process of composition is
split into four stages. Starting from XML documents, component instances are
created. These can be customized and arranged to form a composition. Finally, a
composition is transformed into an executable simulation model. The first three
stages are generally applicable to simulation systems; the last one depends on
the Parallel DEVS formalism and the simulation system James II.
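A toy rendering of the four stages, with an invented XML schema and component structures (the actual framework targets Parallel DEVS and James II):

    # Hedged sketch of the four-stage pipeline the abstract describes: XML
    # component descriptions -> component instances -> customized composition ->
    # executable model. The XML schema and names here are invented.
    import xml.etree.ElementTree as ET

    SPEC = """<component name="Queue">
                <parameter name="capacity" default="10"/>
              </component>"""

    def instantiate(xml_text):                       # stage 1: XML -> instance
        root = ET.fromstring(xml_text)
        params = {p.get("name"): int(p.get("default"))
                  for p in root.findall("parameter")}
        return {"name": root.get("name"), "params": params}

    def customize(instance, **overrides):            # stage 2: set parameters
        instance["params"].update(overrides)
        return instance

    def compose(*instances):                         # stage 3: arrange components
        return {"components": list(instances)}

    def to_executable(composition):                  # stage 4: simulator-specific
        return lambda: sum(c["params"]["capacity"] for c in composition["components"])

    model = to_executable(compose(customize(instantiate(SPEC), capacity=25),
                                  instantiate(SPEC)))
    assert model() == 35                             # 25 + 10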
A Domain-specific Language for Model
Coupling
Tom Bulatewicz and Janice Cuny (University of Oregon)
Abstract:
There is an increasing need for the comprehensive
simulation of complex, dynamic, physical systems. Often such simulations are
built by coupling existing, component models so that their concurrent
simulations affect each other. The process of model coupling is, however, a
non-trivial task that is not adequately supported by existing frameworks. To
provide better support, we have developed an approach to model coupling that
uses high level model interfaces called Potential Coupling Interfaces. In this
work, we present a visual, domain-specific language for model coupling, called
the Coupling Description Language, based on these interfaces. We show that it
supports the resolution of model incompatibilities and allows for the rapid
prototyping of coupled models.
Tuesday 1:30 PM – 3:00 PM
COTS Simulation Package Interoperability
Standards I
Chair: Simon Taylor (Brunel
University)
Developing Interoperability Standards for
Distributed Simulation and COTS Simulation Packages with the CSPI
PDG
Simon J. E. Taylor (Centre for Applied Simulation Modelling),
Stephen J Turner, Malcolm Yoke Hean Low, and Xiaoguang Wang (Nanyang
Technological University), Steffen Strassburger (Fraunhofer Institute for
Factory Operation and Automation) and John Ladbrook (Ford Motor Company)
Abstract:
For many years discrete-event simulation has been used
to analyze production and logistics problems in manufacturing and defense. In
the early 1980s, visual interactive modelling environments were created that
supported the development, experimentation and visualization of simulation
models. Today these environments are termed Commercial-off-the-shelf
Simulation Packages (CSPs). With the advent of distributed simulation and,
later, the High Level Architecture, the possibility existed to link together
these CSPs and their models to simulate larger problems within enterprises
(e.g. multiple production lines) and across supply chains. However, the problem
of standardizing the use of the HLA and its constituent parts in this domain
remains. Addressing this problem is the work of the CSP Interoperability
Product Development Group (CSPI PDG). The purpose of this paper is to introduce
the CSPI PDG, to review the suite of standards proposed by the group, and to
report on current progress.
The Road to COTS-Interoperability: From
Generic HLA-Interfaces Towards Plug-and-Play Capabilities
Steffen
Strassburger (Fraunhofer IFF)
Abstract:
Interoperability between commercial-off-the-shelf (COTS) simulation packages is
a topic that has been discussed for many years without a solution. With the
advent of the High Level Architecture for Modeling and Simulation (HLA), for
the first time a real industry standard has become available that promises
interoperability for a wide range of simulation systems and applications.
Successful attempts to integrate HLA interfaces into different simulation
packages have been made in the past. However, these interfaces typically place
a significant overhead on the simulation developer. Also, because what is
usually provided is a generic HLA interface, the HLA interfaces of different
simulation packages are not necessarily interoperable per se, as there are
different possible ways to use HLA for the same task. This article addresses
these issues and discusses interoperability solutions based on, and going
beyond, HLA. It further investigates the interoperability reference solutions
put forward by the COTS simulation package integration forum.
About the Need for Distributed Simulation
Technology for the Resolution of Real-world Manufacturing and Logistics
Problems
Peter Lendermann (Singapore Institute of Manufacturing
Technology)
Abstract:
Distributed simulation has undergone several cycles of ups and downs in recent
years. Although it has been successful in the military domain, the idea of
applying distributed simulation in other fields for the modeling and analysis
of large-scale, heterogeneous systems such as communication networks or supply
chains has still not taken off. Is this because of inherent limitations or a
lack of applicability as such? Or is it because of additional research issues
that have yet to be resolved to make distributed simulation applicable? In this
paper, the problem is discussed specifically with regard to the application of
distributed simulation to the design, operation, and performance enhancement of
manufacturing and logistics systems.
Tuesday 3:30 PM – 5:00 PM
COTS Simulation Package Interoperability Standards II
Chair: Simon Taylor (Brunel
University)
Interoperating Simulations of Automatic Material
Handling Systems and Manufacturing Processes
Boon Ping Gan and Lai
Peng Chan (Singapore Institute of Manufacturing Technology) and Stephen John
Turner (Nanyang Technological University)
Abstract:
To perform a high-fidelity simulation study of a 300 mm wafer fabrication
plant, modeling the manufacturing process (MP) alone is not sufficient.
Inclusion of an automated material handling system (AMHS) model is necessary
due to the high degree of factory automation. However, no single tool is
capable of modeling both the AMHS and the MP with sufficient accuracy and
granularity. A commercial simulation package such as AutoMod is usually used to
model the AMHS, while AutoSched AP is usually used to model the MP. These
packages can be integrated using the supplied interoperation module, but
flexibility in optimizing the execution performance of different simulation
models is lacking. In this paper, we present an approach to interoperation
based on the High Level Architecture standard. We note that two typical
characteristics of this domain, disparity in the models’ time granularity and
frequent model interactions, are the main obstacles to good execution
performance.
Distributed Simulation with COTS Simulation
Packages: A Case Study in Health Care Supply Chain
Simulation
Navonil Mustafee and Simon J. E. Taylor (Brunel
University) and Korina Katsaliaki and Sally Brailsford (University of
Southampton)
Abstract:
The UK National Blood Service (NBS) is a publicly funded body responsible for
distributing blood and associated products. A discrete-event simulation of the
NBS supply chain in the Southampton area has been built using the commercial
off-the-shelf simulation package (CSP) Simul8TM. It models the relationship in
the health care supply chain between the NBS Processing, Testing and Issuing
(PTI) facility and its associated hospitals. However, as the number of
hospitals increases, simulation run time becomes inconveniently long. To
address this problem with distributed simulation, researchers have used
techniques informed by SISO's CSPI PDG to create a version of Simul8TM
compatible with the High Level Architecture (HLA). The NBS supply chain model
was subsequently divided into several sub-models, each running in its own copy
of Simul8TM. Experimentation shows that this distributed version performs
better than its standalone, conventional counterpart as the number of hospitals
increases.
Reference Models for Supply Chain Design and
Configuration
Markus Rabe, Frank-Walter Jaekel, and Heiko Weinaug
(Fraunhofer IPK)
Abstract:
Today more and more essential processes are conducted across enterprise
borders, introducing additional challenges in terms of different languages,
process types, and ontologies. Business Process Modelling (BPM) and simulation
are well-understood methods for analyzing and optimizing the processes within
an enterprise. However, they can also be used for cross-organizational
applications, especially if they are combined with reference structures. This
paper explains techniques that support cross-enterprise design and
configuration based on Reference Models. Different approaches, such as SCOR,
Integrated Enterprise Modelling (IEM), and a specific Distributed Simulation
Method, are used and integrated into a consistent Reference Model approach. The
application of this approach is illustrated with several projects, each of
which focuses on a specific aspect of supply chain design and configuration.
Wednesday 8:30 AM – 10:00 AM
Ontology Driven Simulation
Chair: John Miller (University of Georgia)
Using Ontologies for Simulation
Modeling
Perakath Benjamin (Knowledge Based Systems Inc.) and Mukul
Patki, Michael Graul, Madhav Erraguntla, and Kumar Akella (Knowledge Based
Systems Inc.)
Abstract:
Ontological analysis has been shown to be an effective first step in the
construction of robust knowledge-based systems. However, the modeling and
simulation community has not taken advantage of the benefits of ontology
management methods and tools. Meanwhile, the growing popularity of semantic
technologies and the semantic web has provided several beneficial opportunities
for the modeling and simulation communities of interest. This
paper describes the role of ontologies in facilitating simulation modeling. It
outlines the technical challenges in distributed simulation modeling and
describes how ontology-based methods may be applied to address these
challenges. The paper concludes by describing an ontology-based solution
framework for simulation modeling and analysis and outlining the benefits of
this solution approach.
An Ontology for Trajectory
Simulation
Umut Durak (TUBITAK-SAGE), Halit Oguztuzun (Dept. of
Computer Engineering, Middle East Technical University) and S. Kemal Ider
(Dept. of Mechanical Engineering, Middle East Technical University)
Abstract:
From concept exploration for a weapon system to training simulators, and from
hardware-in-the-loop simulators to mission planning tools, trajectory
simulations are used throughout the life cycle of a weapon system. A trajectory
simulation can be defined as a computational tool for calculating the flight
path and flight parameters of munitions. Trajectory simulations span a wide
range of performance and fidelity characteristics, from simple point-mass
simulations to six or seven degree-of-freedom hardware-in-the-loop missile
simulations. In our observation, it is common practice in industry for these
simulations to be developed as isolated projects, although they rely on the
same body of knowledge. We envision an ontology that captures the common
knowledge of the trajectory simulation domain and makes it available for reuse.
The Trajectory Simulation Ontology, dubbed TSONT, is being developed to realize
this vision.
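As a point of reference for the simplest end of that fidelity spectrum, a point-mass trajectory simulation reduces to integrating position and velocity under gravity and drag; the sketch below uses arbitrary parameters and explicit Euler steps and has no connection to TSONT or any specific model.

    # Illustrative sketch of the simplest class the abstract mentions, a
    # point-mass trajectory simulation: integrate position and velocity under
    # gravity and quadratic drag with explicit Euler steps. All parameter
    # values are arbitrary.
    import math

    def point_mass_trajectory(v0, angle_deg, mass=10.0, drag_coeff=0.002, dt=0.01, g=9.81):
        """Return (range, time of flight) for a point mass launched from the origin."""
        angle = math.radians(angle_deg)
        x, y = 0.0, 0.0
        vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
        t = 0.0
        while y >= 0.0:
            speed = math.hypot(vx, vy)
            ax = -drag_coeff * speed * vx / mass           # quadratic drag
            ay = -g - drag_coeff * speed * vy / mass
            vx, vy = vx + ax * dt, vy + ay * dt
            x, y = x + vx * dt, y + vy * dt
            t += dt
        return x, t

    rng, tof = point_mass_trajectory(v0=100.0, angle_deg=45.0)
    assert 0 < rng < 100.0 ** 2 / 9.81          # below the vacuum-range upper bound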
Ontology Based Representations of Simulation
Models Following the Process Interaction World View
Gregory A.
Silver (University of Georgia), Lee W. Lacy (University of Central Florida)
and John A. Miller (University of Georgia)
Abstract:
The Discrete Event Simulation (DES) process interaction
world view describes models that focus on simulated entities that progress
through a series of temporally related activities. DES formalisms and vendor
approaches for representing DES models serve as a basis for developing an open
neutral representation of models that can be encoded into ontologies. This
paper reviews world views, formal foundations, and ontologies as background.
The process for creating ontologies for the process interaction DES domain is
discussed. An approach to ontology-based simulation model representation is
presented. Conclusions and recommendations for future work are provided.
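A minimal executable rendering of the process interaction world view, in which each entity is a process that yields the durations of the activities it passes through (illustrative only; this is not the ontology representation the paper proposes):

    # Minimal sketch of the process-interaction world view: each entity is a
    # process (here a Python generator) that yields the durations of the
    # activities it passes through; a tiny scheduler advances simulated time.
    import heapq

    def customer(name, log):
        yield 2.0                    # activity: wait in queue
        yield 3.0                    # activity: receive service
        log.append(name)             # depart

    def run(processes):
        clock, queue = 0.0, [(0.0, i, p) for i, p in enumerate(processes)]
        heapq.heapify(queue)
        while queue:
            clock, i, proc = heapq.heappop(queue)
            try:
                delay = next(proc)               # entity enters its next activity
                heapq.heappush(queue, (clock + delay, i, proc))
            except StopIteration:
                pass
        return clock

    done = []
    end = run([customer("c1", done), customer("c2", done)])
    assert done == ["c1", "c2"] and end == 5.0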
Wednesday 10:30 AM – 12:00 PM
Modeling of Distributed Systems
Chair: Hassan Rajaei (Bowling Green State
University)
On the Performance of Inter-Organizational Design
Optimization Systems
Paolo Vercesi (Esteco) and Alberto Bartoli
(DEEI)
Abstract:
Simulation-based design optimization is a key
technology in many industrial sectors. Recent developments in software
technology have opened a novel range of possibilities in this area. It has now
become possible to involve multiple organizations in the simulation of a
candidate design, by composing their respective simulation modules on the
Internet. Thus, it is possible to deploy an inter-organizational design
optimization system, which may be particularly appealing because modern
engineering products are assembled out of smaller blocks developed by
different organizations. In this paper we explore some of the fundamental
performance-related issues involved in such a novel scenario, by analyzing a
variety of options: centralized control vs. distributed control; generation of
new candidate designs one at a time or in batches; communication and
computation performed serially or with time overlap. Our analysis provides
useful insights into the numerous trade-offs involved in the implementation of
inter-organizational design optimization.
Using Java Methods Traces to Automatically
Characterize and Model J2EE Server Applications
Darpan Dinker and
Herb Schwetman (Sun Microsystems, Inc.)
Abstract:
This paper describes a novel framework used to
characterize a J2EE (Java Enterprise Edition) application and develop models
of the application by using Java method tracing in a Java-technology-based
application server. Application servers are critical to large-scale online
services and serve as middleware to provide secure access to transactional,
legacy, and web services. The tracing tool in this framework gives a detailed
and comprehensive view of the sequences of methods invoked as the application
server processes requests. The output of this tool is processed and
automatically summarized into a set of transaction profiles, which form the
input for a simulation model of the application server and its related
components. These profiles have proven to be a useful abstraction of the
behavior of the transactions processed by the system. After describing the
tool and the model, the paper provides results of validation runs and
applications of these techniques.
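The summarization step can be pictured as grouping traced method records by transaction type and reducing them to call counts and mean service demands; the record format and field names below are assumptions for illustration, not the tool's actual output.

    # Hedged sketch of summarizing method traces into transaction profiles:
    # group traced (transaction, method, elapsed) records by transaction type
    # and compute per-method call counts and mean service demand.
    from collections import defaultdict
    from statistics import mean

    trace = [                                    # (transaction type, method, ms)
        ("checkout", "CartBean.total",   1.0),
        ("checkout", "OrderBean.place",  4.5),
        ("checkout", "CartBean.total",   3.0),
        ("browse",   "CatalogBean.list", 2.0),
    ]

    def build_profiles(records):
        grouped = defaultdict(lambda: defaultdict(list))
        for txn, method, ms in records:
            grouped[txn][method].append(ms)
        return {txn: {m: {"calls": len(v), "mean_ms": mean(v)}
                      for m, v in methods.items()}
                for txn, methods in grouped.items()}

    profiles = build_profiles(trace)             # input for the simulation model
    assert profiles["checkout"]["CartBean.total"] == {"calls": 2, "mean_ms": 2.0}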
Simulation of Job Scheduling for Small Scale
Clusters
Hassan Rajaei, Mohammad B Dadfar, and Pankaj Joshi
(Bowling Green State University)
Abstract:
Despite the growing popularity of small-scale clusters
built out of off-the-shelf components, there has been little research on how
these small-scale clusters behave under different scheduling policies. Batch
scheduling policies with backfilling provide an excellent space-sharing
strategy for parallel jobs. However, since the performance of uniprocessor and
symmetric multiprocessor systems has improved with time-sharing scheduling
strategies, it is intuitive that the performance of a cluster of PCs with
distributed memory may also improve with time-sharing strategies, or with a
combination of time-sharing and space-sharing strategies. In addition to batch
scheduling policies, this research explores the possibility of using
synchronized time-sharing scheduling algorithms for clusters. This paper
describes the simulation of Gang scheduling policies on top of an existing
batch scheme. The simulation results indicate that a time-sharing scheduler for
clusters can outperform a batch policy.
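The kind of experiment described can be sketched with a small scheduling simulation: jobs defined by node count and runtime are placed on a fixed pool of nodes either strictly FCFS or with EASY-style backfilling (a later job may start early only if it cannot delay the job at the head of the queue). The policy and code below are generic illustrations, not the authors' Gang scheduler or simulator.

    # Hedged sketch of a cluster-scheduling simulation: a fixed pool of nodes,
    # jobs given as (nodes required, runtime), scheduled FCFS with optional
    # EASY-style backfilling. Generic textbook policy, illustrative only.
    def makespan(jobs, nodes, backfill=False):
        """jobs: list of (width, runtime); all arrive at time 0. Returns the makespan."""
        t, queue, running = 0, list(range(len(jobs))), []   # running: (end_time, width)
        while queue or running:
            running = [(end, w) for end, w in running if end > t]
            free = nodes - sum(w for _, w in running)
            while queue and jobs[queue[0]][0] <= free:      # FCFS: start the head if it fits
                w, rt = jobs[queue.pop(0)]
                running.append((t + rt, w))
                free -= w
            if backfill and queue:
                need, avail, shadow = jobs[queue[0]][0], free, None
                for end, w in sorted(running):              # when could the head start?
                    avail += w
                    if avail >= need:
                        shadow = end
                        break
                for j in list(queue[1:]):                   # slip in jobs that cannot delay it
                    w, rt = jobs[j]
                    if w <= free and (shadow is None or t + rt <= shadow):
                        queue.remove(j)
                        running.append((t + rt, w))
                        free -= w
            t += 1
        return max(t - 1, 0)

    jobs = [(2, 3), (4, 2), (1, 2)]            # (nodes required, runtime)
    assert makespan(jobs, nodes=4) == 7
    assert makespan(jobs, nodes=4, backfill=True) == 5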