WSC 2002 Final Abstracts

Modeling Methodology B Track

Monday 10:30 AM - 12:00 PM
XML-Based Modeling and Simulation

Chair: Paul A. Fishwick (University of Florida)

Meta-Models are Models Too
Hans Vangheluwe (McGill University) and Juan de Lara (Universidad Autónoma de Madrid)

We introduce multi-formalism modelling and meta-modelling to facilitate computer assisted modelling and simulation of complex systems. Formalisms are modelled in their own right, at a meta-level, within an appropriate formalism. This approach is implemented in the interactive tool AToM3. AToM3 is used to describe formalisms commonly used in the simulation of dynamical systems, as well as to generate custom tools to process models expressed in the corresponding formalism. The Finite State Automata (FSA) formalism is used to demonstrate the concepts, in particular the simulation of FSA models. The issue of a neutral model exchange and re-use format is addressed in the context of meta-modelling. Core XML is proposed as a standard external format. Thanks to the power of the meta-modelling approach, DTD, XML Schema, and XSLT specifications may be replaced by models, externally represented in core XML, in appropriate formalisms (Entity Relationship for syntax, Graph Grammar for transformation).

Web Service Technologies and their Synergy with Simulation
Senthilanand Chandrasekaran, Gregory Silver, John A. Miller, Jorge Cardoso, and Amit P. Sheth (University of Georgia)

The World Wide Web has had a huge influence on the computing field in general as well as simulation in particular (e.g., Web-Based Simulation). A new wave of development based upon XML has started. Two of the most interesting aspects of this development are the Semantic Web and Web Services. This paper examines the synergy between Web service technology and simulation. In one direction, Web service processes can be simulated for the purpose of correcting/improving the design. In the other direction, simulation models/components can be built out of Web services. Work on seamlessly using simulation as a part of Web service composition and process design, as well as on using Web services to re-build the JSIM Web-based simulation environment, is highlighted.

Using XML for Simulation Modeling
Paul A. Fishwick (University of Florida)

XML represents a new way of organizing the World Wide Web, using markup languages. Whereas HTML is used for presentation-specific content, XML builds upon its SGML lineage to separate content from presentation, and provide a semantic labeling for elements that comprise a document. With XML, the concept of "document" is broadened to include an encapsulation of information and knowledge, and not merely a flat medium. This suggests that XML can be used for model specification and computer simulation. With this in mind, we have used XML to create two modeling specification languages: MXL and DXL. We begin with an overview of XML, then discuss MXL and DXL, and show an example of how the languages are employed in the modeling process and can be used with a variety of presentations.

Monday 1:30 PM - 3:00 PM
Open Source Initiatives for Simulation Software

Chair: Richard A. Kilgore (ThreadTec)

Next Generation Simulation Environments Founded on Open Source Software and XML-Based Standard Interfaces
Thomas Wiedemann (HTW Dresden)

During the 2001 Winter Simulation Conference, the OpenSML project was presented and launched. The OpenSML project is based on the Simulation Modeling Language (SML™) and is an open source, web-based, multi-language simulation development project guided by a consortium of industrial, academic and government simulation consultants, practitioners and developers. For the simulation community, the open source movement represents an opportunity to improve the quality of common core simulation functions, improve the potential for creating reusable modeling components from those core functions, and improve the ability to merge those components using XML, HLA and other simulation community standards. This paper extends the OpenSML project by using universal, language-independent XML descriptions and code generators for converting OpenSML models to programs in Java, VisualBasic or C++. This would be the first time a simulation model could be transferred between different platforms without manual changes.

Multi-Language, Open-Source Modeling Using the Microsoft .Net Architecture
Richard A. Kilgore (OpenSML and ThreadTec, Inc.)

This presentation reports on the opportunities and limitations of the Microsoft .Net architecture for supporting the development of a common, open-source, multi-language platform for simulation software support. While the paper supporting the presentation focuses on the underlying foundation within the .Net architecture, the conference presentation represents an important milestone in the OpenSML project corresponding to the first release of a common library supporting the C#, VB.Net and Java/J# languages.

A Web-Ready HiMASS: Facilitating Collaborative, Reusable, and Distributed Modeling and Execution of Simulation Models with XML
Thorsten S. Daum (HiMASS) and Robert G. Sargent (Syracuse University)

We investigate the use of XML as an open, cross-platform, and extendable file format for the description of hierarchical simulation models, including their graphical representations, initial model conditions, and model execution algorithms. We present HiMASS-x, an XML-centered suite of software applications that allows for cross-platform, distributed modeling and execution of hierarchical, componentized, and reusable simulation models.

Monday 3:30 PM - 5:00 PM
Improving the Model Development Process

Chair: Richard Nance (Virginia Tech)

Model Testing: Is it Only a Special Case of Software Testing?
C. Michael Overstreet (Old Dominion University)

Effective testing of software is an important concern in the software engineering community. While many techniques regularly used for testing software apply equally well to testing the implementations of simulation models, we believe that testing simulations often raises issues that occur infrequently in other types of software. We believe that many code characteristics that commonly occur in simulation code are precisely those that the software testing community has identified as making testing challenging. We discuss many of the techniques that the software engineering community has developed to deal with those features and evaluate their applicability to simulation development.

What Use is Model Reuse: Is There a Crook at the End of the Rainbow?
Ray J. Paul and Simon J.E. Taylor (Brunel University)

The emergence of new technologies in simulation modelling, such as the World Wide Web, has fostered debate on the reuse of models. In this paper we present a case for model reuse and the pot of gold that it promises. We then discuss model reuse from the viewpoint of simulation modelers who use COTS simulation packages and suggest that model reuse may in fact cost more than developing new models, since trust in candidate models must be established through thorough testing. As an alternative, we suggest that a Grab-and-Glue, Run, Reject, Reply (G2R3) approach is a more appropriate form of model reuse, as it emphasizes the intellectual process of problem understanding rather than model correctness as an end in itself.

Expanding our Horizons in Verification, Validation, and Accreditation Research and Practice
Osman Balci, Richard E. Nance, and James D. Arthur (Virginia Tech) and William F. Ormsby (Naval Surface Warfare Center)

Many different types of modeling and simulation (M&S) applications, consisting of a combination of software, hardware, and humanware, are used in dozens of disciplines under diverse objectives including acquisition, analysis, education, entertainment, research, and training. Certification of sufficient accuracy of an M&S application by conducting verification, validation, and accreditation (VV&A) requires multifaceted knowledge and experience, and poses substantial technical and managerial challenges for researchers, practitioners, and managers. The challenges can only be met by using a very broad spectrum of approaches and expanding our horizons in VV&A. This paper presents 13 strategic directions to meet those challenges. The strategic directions provide guidelines for successful VV&A research and practice.

Tuesday 8:30 AM - 10:00 AM
Network Modeling and Simulation

Chair: David M. Nicol (Dartmouth College)

On Standardized Network Topologies for Network Research
George F. Riley (Georgia Institute of Technology)

Simulation has become the evaluation method of choice for many areas of computer networking research. When designing new or revised transport protocols, queuing methods, or routing protocols (just to name a few), a common approach is to create a simulation of a small to moderate scale topology and measure the performance of the new methodology as compared to existing methods. We demonstrate that simulation results using this approach can lead to very misleading, and even incorrect, results. The interaction between the large number of variables in these simulations can lead to results that vary widely between different simulation topologies. We give empirical evidence showing different conclusions when the same comparisons are done using differing topologies. We argue the need for a standardized taxonomy of simulation topologies that capture a significant and realistic range of values for the various variables that impact the performance of a simulated network.

A Motion Environment for Wireless Communications Systems Simulations
Nathan J. Smith and Trefor J. Delve (Motorola Labs)

We describe the environment and motion systems used in a parallel, discrete event large scale wireless simulator. The simulator is capable of supporting user motion on multiple environment types (different types of streets, buildings etc.) and provides a unified and intuitive interface to users whilst being efficient for the systems that make use of it. This is achieved by making use of a hierarchical environment description. With this approach, users can provide different levels of detail as required, whilst the motion systems have a simple interface to interrogate the environment. As there is a close coupling between the environment and the RF data required by the wireless simulator (which is considerable in size), this too is represented in a hierarchical manner. This allows a more efficient use of system memory with only the data that is required being loaded.

A Scalable Simulator for TinyOS Applications
Luiz Felipe Perrone and David M. Nicol (Dartmouth College)

Large clouds of tiny devices capable of computation, communication, and sensing, the goal of the Smart Dust project, will soon become a reality. Hardware miniaturization is shrinking devices and research in software is producing applications that allow devices to communicate and cooperate toward a common goal. Success on the software front hinges on the design of algorithms that can scale up with system size. Given that the number of individual cooperating devices will reach high orders of magnitude (hundreds of thousands or even millions), debugging and evaluating the software in such a large system can reap much benefit from simulation. This paper describes the design of a scalable and flexible simulator which allows for the direct execution, at source code level, of applications written for TinyOS, the operating system that executes on Smart Dust. This simulator also provides detailed models for radio signal propagation and node mobility.

Tuesday 10:30 AM - 12:00 PM
Panel Discussion on Distributed Simulation and Industry: Potentials and Pitfalls

Chair: Philip A. Wilsey (University of Cincinnati)

Distributed Simulation and Industry: Potentials and Pitfalls
Simon J. E. Taylor (Brunel University), Agostino Bruzzone (University of Genoa), Richard Fujimoto (Georgia Institute of Technology), Boon Ping Gan (Singapore Institute of Manufacturing Technology), Steffen Straßburger (DaimlerChrysler) and Ray J. Paul (Brunel University)

This panel paper presents the views of five researchers and practitioners of distributed simulation. Collectively we attempt to address what the implications of distributed simulation are for industry. It is hoped that the views contained herein, and the presentations made by the panelists at the 2002 Winter Simulation Conference, will raise awareness and stimulate further discussion on the application of distributed simulation methods and technology in an area that is yet to benefit from the arguable economic benefits that this technique promises.

Tuesday 1:30 PM - 3:00 PM
Parallel and Distributed Simulation

Chair: James D. Arthur (Virginia Tech)

Distributed Spatio-Temporal Modeling and Simulation
Thomas Schulze (Otto-von-Guericke University), Andreas Wytzisk and Ingo Simonis (University of Muenster) and Ulrich Raape (Fraunhofer Institute for Factory Operation/Automation)

The objective of upcoming research in the field of geo-processing is to evolve interoperability standards to develop flexible and scalable controlling and simulation services. In order to overcome the limitations of proprietary solutions, efforts have been made to support interoperability among simulation models and geo information systems (GIS). Existing standards in the domain of spatial information and spatial services define geoinformation (GI) in a more or less static way. Though time can be handled as an additional attribute, its representation is not explicitly specified. In contrast, as the standard for distributed heterogeneous simulation, the High Level Architecture (HLA) provides a framework for distributed time-variant simulation processes, but HLA lacks support for spatial information. A web-based Distributed spAtio-temporaL Interoperability architecture (DALI) integrating these initiatives is presented here. The long-term goal of the DALI architecture is to make standardized off-the-shelf GI and simulation services usable for highly specialized simulation and controlling applications.

Managing External Workload with BSP Time Warp
Malcolm Yoke Hean Low (University of Oxford)

This paper describes an extension to the existing BSP Time Warp dynamic load-balancing algorithm to allow the management of interruption from external workload. Experiments carried out on a manufacturing simulation model using different partition strategies with and without interruption from external workload show that significant performance improvement can be achieved with external workload management.

Fast Cell Level ATM Network Simulation
Xiao Zhong-e, Rob Simmonds, and Brian Unger (University of Calgary) and John Cleary (University of Waikato)

This paper presents performance results for cell level ATM network simulations using both sequential and parallel discrete event simulation kernels. Five benchmarks are used to demonstrate the performance of the simulation kernels for different types of model. The results demonstrate that for the type of network models used in the benchmarks, the TasKit simulation kernel is able to outperform all of the other kernels tested both sequentially and in parallel. For one benchmark TasKit is shown to outperform a conventional sequential simulation kernel by a factor of 3. For the same benchmark TasKit is shown to outperform the best of the other parallel kernels tested by a factor of 6. The paper explains how this performance advantage is achieved and cautions that additional research into automatic model partitioning will be essential to make this technology accessible to the general simulation community.

Tuesday 3:30 PM - 5:00 PM
Modeling Very Large Scale Systems

Chair: Lee Schruben (University of California, Berkeley)

One-to-One Modeling and Simulation of Unbounded Systems: Experiences and Lessons
Rohyt V. Belani, Saumitra M. Das, and David Fisher (Carnegie Mellon University)

Conventional computer modeling and simulation have focused on computer objects that represent elements of the real world. In this paper we present a new approach to modeling and simulation in which authors describe the characteristics of the world being simulated without specifying how they are to be represented as computer objects. This approach is enabled by the Easel modeling and simulation system (EMSS). Furthermore, this approach does not assume global visibility and centralized control, which are inherently inaccurate assumptions for unbounded systems (in which participants have incomplete or imprecise information about the system as a whole). Because this approach allows models to be one-to-one with the real world, models should be more accurate and simulations more realistic. The discussion includes the challenges faced in the modeling and simulation process of the distance vector IP routing protocol over a large-scale communications network, and the language features intended to address these problems.

Using Simulation Modeling to Assess Rail Track Infrastructure in Densely Trafficked Metropolitan Areas
Maged M. Dessouky and Quan Lu (University of Southern California) and Robert C. Leachman (University of California, Berkeley)

We present a simulation modeling methodology to assess the rail track infrastructure in highly dense traffic areas. We used this model to determine the best trackage configuration to meet future demand in the Los Angeles-Inland Empire Trade Corridor Region. There are three major challenges in modeling a rail network in a densely trafficked metropolitan area. They are: (1) complex trackage configurations, (2) various speed limits, and (3) non-fixed dispatching timetables and routes between the origin and destination. Our proposed model has the ability to handle the above complexities in order to determine the best use of the rail capacity. Furthermore, our methodology is general enough so that it can be applied to other large scale rail networks.

Building Complex Models with LEGOs (Listener Event Graph Objects)
Arnold H. Buss and Paul J. Sánchez (Naval Postgraduate School)

Event Graphs are a simple and elegant language-independent way of representing a Discrete Event Simulation (DES) model. In this paper we propose an extension to basic Event Graphs that enables small models to be encapsulated in reusable modules called Listener Event Graph Objects (LEGOs). These modules are linked together using a design pattern from Object Oriented Programming called the "listener pattern" to produce new modules of even greater complexity. The modules generated in this way can themselves be linked and encapsulated, forming a hierarchical design which is highly scalable. These concepts have been implemented in Simkit, a freely available simulation package implemented in Java.
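The listener linkage the abstract describes can be sketched in Java; this is a minimal illustration of the general listener pattern, with hypothetical class names (SimEventSource, ArrivalProcess, ArrivalCounter) rather than Simkit's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// A component that broadcasts its simulation events to registered listeners.
interface SimEventListener {
    void processSimEvent(String eventName);
}

class SimEventSource {
    private final List<SimEventListener> listeners = new ArrayList<>();

    public void addSimEventListener(SimEventListener l) { listeners.add(l); }

    // Every event this module fires is re-dispatched to its listeners, so
    // modules compose without compile-time knowledge of one another.
    protected void notifyListeners(String eventName) {
        for (SimEventListener l : listeners) l.processSimEvent(eventName);
    }
}

// One small Event Graph module: a source of Arrival events.
class ArrivalProcess extends SimEventSource {
    public void arrival() { notifyListeners("Arrival"); }
}

// Another module that reacts to Arrival events it hears.
class ArrivalCounter implements SimEventListener {
    int numberOfArrivals = 0;
    public void processSimEvent(String eventName) {
        if ("Arrival".equals(eventName)) numberOfArrivals++;
    }
}

public class LegoSketch {
    public static void main(String[] args) {
        ArrivalProcess source = new ArrivalProcess();
        ArrivalCounter counter = new ArrivalCounter();
        source.addSimEventListener(counter); // the only coupling between modules
        source.arrival();
        source.arrival();
        System.out.println(counter.numberOfArrivals); // prints 2
    }
}
```

Because the only coupling is the listener registration, modules built this way can themselves be wrapped as event sources and listeners, giving the hierarchical composition the abstract describes.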

Wednesday 8:30 AM - 10:00 AM
Methods and Tools for Aerospace Operations Modeling and Simulation

Chair: Perakath Benjamin (Knowledge Based Systems, Inc.)

New Perspectives Towards Modeling Depot MRO
Frank Boydstun (Oklahoma City Air Logistics Center) and Michael Graul, Perakath Benjamin, and Michael Painter (Knowledge Based Systems, Inc.)

There are subtle, yet critical and unique, differences that distinguish the depot maintenance, repair, and overhaul (MRO) domain from production manufacturing. These differences motivate the need for more efficient ways to capture the essence of the depot MRO domain dynamics. The authors provide an informal characterization of depot MRO by highlighting some of the major differences. Along with this characterization, they propose a set of principles governing the physics of depot MRO operation. Finally, they describe the nature of the idealizations needed to model and simulate this domain and a vision for future technologies that could more adequately and directly address these needs.

Generic Simulation Models of Reusable Launch Vehicles
Martin J. Steele (National Aeronautics & Space Administration), Mansooreh Mollaghasemi (Productivity Apex, Inc.), Ghaith Rabadi (Old Dominion University) and Grant Cates (National Aeronautics and Space Administration)

Analyzing systems by means of simulation is necessarily a time-consuming process. This becomes even more pronounced when models of multiple systems must be compared. In general, and even more so in today's fast-paced environment, competitive pressure does not allow for waiting on the results of a lengthy analysis. That competitive pressure also makes it more imperative that the processing performance of systems be seriously considered in the system design. Having a generic model allows one model to be applied to multiple systems in a given domain and provides a feedback mechanism to systems designers as to the operational impact of design decisions.

Modeling the Space Shuttle
Grant R. Cates and Martin J. Steele (NASA), Mansooreh Mollaghasemi (University of Central Florida) and Ghaith Rabadi (Old Dominion University)

We summarize our methodology for modeling space shuttle processing using discrete event simulation. Why the project was initiated, what the overall goals were, how it was funded, and who were the members of the project team are identified. We describe the flow of the space shuttle flight hardware through the supporting infrastructure and how the model was created to accurately portray the space shuttle. The input analysis methodology that was used to populate the model elements with probability distributions for process durations is described in the paper. Verification, validation, and experimentation activities are briefly summarized.

Toolkit for Enabling Adaptive Modeling and Simulation (TEAMS)
Perakath Benjamin, Michael Graul, and Madhav Erraguntla (Knowledge Based Systems, Inc.)

This paper describes the architecture of a Toolkit for Enabling Adaptive Modeling and Simulation (TEAMS). TEAMS addresses key technical problems associated with Space Transportation System operations process modeling and analysis. TEAMS facilitates collaborative and distributed spaceport operations analysis. Functions supported by TEAMS include (i) knowledge management, (ii) operations modeling, and (iii) operations analysis. Key innovations include (i) a process-centered approach that maximizes re-use of domain knowledge for rapid operations analysis model development, (ii) an open, distributed plug-and-play architecture that allows for mass customization and rapid deployment of TEAMS, and (iii) novel, simulation-based optimization mechanisms. A TEAMS prototype has been developed and demonstrated at Kennedy Space Center.

Wednesday 10:30 AM - 12:00 PM
Reusing Simulation Components

Chair: Hessam S. Sarjoughian (Arizona State University)

Simulation Software and Model Reuse: A Polemic
Michael Pidd (Lancaster University)

Is it really true that simulation models and simulation software should always be regarded as candidates for reuse, or is it better to be selective? What are the obstacles to simulation software and model reuse? Can these be surmounted and, if so, at what cost? There is a range of levels at which simulation software may be reused, a range of costs to be borne, and a range of benefits that may be achieved. It is crucial to consider the issue of validity when considering model reuse, and this needs to be a fundamental part of any reuse strategy. There may be circumstances in which reuse is economic, especially when a small, low-fidelity model will suffice.

COST: A Component-Oriented Discrete Event Simulator
Gilbert Chen and Boleslaw K. Szymanski (Rensselaer Polytechnic Institute)

COST (Component-Oriented Simulation Toolkit) is a general-purpose discrete event simulator. The main design purpose of COST is to maximize the reusability of simulation models without losing efficiency. To achieve this goal, COST adopts a component-based simulation worldview based on a component-port model. A simulation is built by configuring and connecting a number of components, either off-the-shelf or fully customized. Components interact with each other only via input and output ports; thus the development of each component is completely independent of the others. The component-port model of COST makes it easy to construct simulation components from scratch. Implemented in C++, COST also features a wide use of templates to facilitate language-level reuse.
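The component-port idea in the abstract might be sketched as follows. This is in Java rather than COST's C++, and all names (OutPort, Producer, Accumulator) are illustrative assumptions, not COST's actual API; the point is only that components exchange messages through wired ports and never reference each other directly:

```java
import java.util.function.Consumer;

// An output port: holds a wire to some peer's input port.
class OutPort<T> {
    private Consumer<T> wire;
    void connectTo(Consumer<T> inPort) { this.wire = inPort; }
    void send(T msg) { if (wire != null) wire.accept(msg); }
}

// A component that emits integers on its output port. It has no
// knowledge of what, if anything, is connected downstream.
class Producer {
    final OutPort<Integer> out = new OutPort<>();
    void generate(int n) {
        for (int i = 1; i <= n; i++) out.send(i);
    }
}

// An independently developed component; its input port is just a
// handler that a peer's output port is wired to call.
class Accumulator {
    int sum = 0;
    void inPort(int msg) { sum += msg; }
}

public class PortSketch {
    public static void main(String[] args) {
        Producer p = new Producer();
        Accumulator a = new Accumulator();
        p.out.connectTo(a::inPort); // configuration step: wire the ports
        p.generate(3);              // sends 1, 2, 3
        System.out.println(a.sum);  // prints 6
    }
}
```

Swapping in a different downstream component only changes the single `connectTo` line, which is the reuse property the component-port model is after.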

Generalizing: Is it Possible to Create All-Purpose Simulations?
Glenn P. Rioux (U.S. Navy) and Richard E. Nance (Virginia Tech)

The title poses the essential question addressed herein: Is it possible to construct simulations that permit use in application domains with widely ranging objectives? The question is raised in a tentative explanation of what is entailed in an answer. Beginning with a taxonomy based on simulation objectives, we identify differences among the categories with respect to what is attendant in realizing different objectives and in using associated methodologies and tools. The closing summary highlights the importance of producing an answer or eliminating the question.
