Conference Paper

A language for specification and programming of reconfigurable parallel computation structures.

... But development of effective abstraction hierarchies is not simple. We propose that definition and realization of effective abstraction hierarchies should be based on the principle of separation of concerns [4, 5]. Construction of abstraction hierarchies based on separation of concerns is discussed in detail in Section 2.2. ...
... It can be seen from Figures 3.6 and 3.7 that the design structure derived using our design model based on separation of concerns and hierarchical abstractions directly complements its implementation class hierarchy, thereby preserving all the attributes of the design in the implementation. ...
Article
Full-text available
This paper defines, describes and illustrates a systems engineering process for development of software systems implementing high performance computing applications. The example which drives the creation of this process is development of a flexible and extendible program development infrastructure for parallel structured adaptive meshes, the HDDA/DAGH package. The fundamental systems engineering principles used (hierarchical abstractions based on separation of concerns) are well-known but are not commonly applied in the context of high performance computing software. Application of these principles will be seen to enable implementation of an infrastructure which combines breadth of applicability and portability with high performance. Key words: software systems engineering, structured adaptive mesh-refinement, high performance software development, distributed dynamic data-structures.
... Examples of languages designed to support either the SIMD or SPMD modes of data-parallel programming include CM-Fortran [ChC92], HPF [HPF92], and Fortran-D [HiK91]. Other languages, such as CSL [BrT82], Hellena [AuB87], and ELP [NiS93], have been developed for machines that are capable of mixed-mode parallelism, and include language elements for both SIMD and SPMD (or MIMD) computation. These "mixed-mode" languages provide support for executing different portions of the same program in different modes (e.g., SIMD versus SPMD). ...
Article
Full-text available
Complete application tasks, of the type that would be of interest to Rome Laboratory, are large and complex. One approach to dealing with them is heterogeneous computing. Two types of heterogeneous computing systems are: (1) mixed-mode, wherein multiple types of parallelism are available on a single machine; and (2) mixed-machine, wherein a suite of different high-performance computers is connected by high-speed links. In this effort, we studied ways to decompose an application into subtasks and then match each subtask to the mode or machine that results in the smallest total task execution time. Our accomplishments include: (1) conducting a mixed-mode case study; (2) developing an approach for automatically decomposing a task for mixed-mode execution, and assigning modes to subtasks; (3) extending this approach for use as a heuristic for a particular class of mixed-machine heterogeneous computing systems; (4) surveying the state of the art of heterogeneous computing, and constructing a conceptual framework for automatic mixed-machine heterogeneous computing; (5) examining how to estimate the non-deterministic execution times of subtasks and complete tasks; and (6) devising an optimal scheme for inter-machine data transfers for a given matching of subtasks to machines.
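The subtask-matching idea in this abstract can be made concrete with a small sketch. The following is a minimal illustration, not the report's actual method: it assumes a linear chain of subtasks, a per-machine execution-time estimate for each subtask, and a fixed inter-machine transfer cost, and finds the assignment with the smallest total time by dynamic programming. All names and numbers are hypothetical.

```python
# Sketch: match a chain of subtasks to machines, minimizing total time.
# exec_time[i][m] = estimated time of subtask i on machine m (invented).
# transfer[a][b]  = cost of moving data from machine a to machine b.
exec_time = [[4, 9], [7, 2], [3, 3]]
transfer = [[0, 1], [1, 0]]

def match_subtasks(exec_time, transfer):
    n, M = len(exec_time), len(exec_time[0])
    best = list(exec_time[0])      # best[m]: cheapest way to end on machine m
    back = []                      # predecessor choices, for backtracking
    for i in range(1, n):
        row, pred = [], []
        for m in range(M):
            p = min(range(M), key=lambda a: best[a] + transfer[a][m])
            row.append(best[p] + transfer[p][m] + exec_time[i][m])
            pred.append(p)
        best, back = row, back + [pred]
    m = min(range(M), key=best.__getitem__)
    total, assign = best[m], [m]
    for pred in reversed(back):    # recover the assignment backwards
        m = pred[m]
        assign.append(m)
    return list(reversed(assign)), total

print(match_subtasks(exec_time, transfer))  # ([0, 1, 1], 10)
```

For a chain this runs in O(n·M²) time; the report's setting (general task graphs, non-deterministic execution times) is substantially harder.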
... The significance of separating sequential application program components from the ways in which these components interact has long been recognized. In early systems, component interaction was specified in separate text files [12]. The advent of workstation technology and graphical user interfaces (GUI) greatly enhanced the ease, efficiency and effectiveness of specifying parallel structures [6, 10, 24]. ...
Conference Paper
Full-text available
For almost a decade we have been working at developing and using template-based models for coarse-grained parallel computing. Our initial system, FrameWorks, was positively received but had a number of shortcomings. The Enterprise parallel programming environment evolved out of this work, and now, after several years of experience with the system, its shortcomings are becoming evident. This paper outlines our experiences in developing and using the two parallel programming systems. Many of our observations are relevant to other parallel programming systems, even though they may be based on different assumptions. Although template-based models have the potential for simplifying the complexities of parallel programming, they have yet to realize these expectations for high-performance applications.
... Separation of sequential and multiprogramming features has also been advocated in Browne et al. [3]. Fourth, Seuss severely restricts the amount of control available to the programmer at the multiprogramming level. ...
Article
Object-based sequential programming has had a major impact on software engineering. However, object-based concurrent programming remains elusive as an effective programming tool. The class of applications that will be implemented on future high-bandwidth networks of processors will be significantly more ambitious than the current applications (which are mostly involved with transmissions of digital data and images), and object-based concurrent programming has the potential to simplify designs of such applications. Many of the programming concepts developed for databases, object-oriented programming and designs of reactive systems can be unified into a compact model of concurrent programs that can serve as the foundation for designing these future applications. We propose a model of multiprograms and a discipline of programming that addresses the issues of reasoning (e.g., understanding) and efficient implementation. The major point of departure is the disentanglement of sequential and multiprogramming features. We propose a sparse model of multiprograms that distinguishes these two forms of computations and allows their disciplined interactions.
Conference Paper
Algorithms designed for highly parallel processing often require specific interprocess communication topologies, including vectors, meshes, trees, toruses and cube-connected structures. Static communication structures are naturally expressed as graphs with regular properties, but this level of abstraction is not supported in current environments. Our approach to programming massively parallel processors involves a graph editor, which allows the programmer to specify communication structures graphically. As a foundation for graph editor operations, we are currently investigating properties of aggregate rewriting graph grammars which rewrite, in parallel, aggregates of nodes whose labels are logically related. We have found these grammars to be efficient in their description of many recursively defined graphs. Languages generated by these grammars can be associated with families of graphs. We also suggest extensions to the formalism that make use of extended labeling information that would be available in graph editors.
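A minimal sketch of one aggregate rewriting step, assuming a graph stored as an adjacency dict with node labels: a single rule rewrites, in one step, the aggregate of all nodes labeled "leaf", so that k derivation steps yield the complete-binary-tree family. The rule and representation are illustrative, not the paper's grammar formalism.

```python
# One aggregate rewriting step: every node labeled "leaf" is rewritten,
# together, into an internal node with two fresh leaf children.
def rewrite_leaves(edges, labels):
    new_edges = {n: list(ns) for n, ns in edges.items()}
    new_labels = dict(labels)
    fresh = len(labels)                         # next unused node id
    aggregate = [n for n, l in labels.items() if l == "leaf"]
    for n in aggregate:                         # all matches, "in parallel"
        new_labels[n] = "internal"
        for _ in range(2):
            new_edges[n].append(fresh)
            new_edges[fresh] = []
            new_labels[fresh] = "leaf"
            fresh += 1
    return new_edges, new_labels

edges, labels = {0: []}, {0: "leaf"}            # axiom: a single leaf
for _ in range(3):                              # three derivation steps
    edges, labels = rewrite_leaves(edges, labels)
print(len(labels))                              # 15: a complete binary tree
```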
Article
The need for massive computations, frequent in real-time applications, has accelerated the interest in parallel processing. While many hardware architecture approaches have been suggested, there has been only limited progress in programming and evaluating the effectiveness of parallel processing. This article focuses on parallel software development methodology and associated automatic systems that facilitate progressive improvement of parallel performance through repeatedly perturbing the parallelism partitioning and analyzing the resulting performance. Parallel programming in the past has been motivated by the desire to reduce extremely long computation times. The emphasis here is on problems which involve both massive computations and real-time requirements, since we consider this combination to be the prime future area for use of parallel processing.
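The perturb-and-analyze cycle described here can be sketched as a simple loop: perturb the parallelism partitioning, measure the result, and keep any improvement. Everything below is a hypothetical stand-in for the article's methodology; a simulated cost model replaces real profiling.

```python
import random

# Simulated cost of running a fixed workload split into n_chunks pieces:
# a per-chunk overhead plus the largest chunk's work (invented model).
def measured_cost(n_items, n_chunks):
    largest = -(-n_items // n_chunks)           # ceiling division
    return 0.01 * n_chunks + 0.001 * largest

def tune(n_items, steps=30, seed=1):
    rng = random.Random(seed)
    parts, cost = 1, measured_cost(n_items, 1)
    for _ in range(steps):
        candidate = max(1, parts + rng.choice([-2, -1, 1, 2]))  # perturb
        c = measured_cost(n_items, candidate)                   # analyze
        if c < cost:                                            # keep wins
            parts, cost = candidate, c
    return parts, cost

print(tune(400))   # converges near the partitioning with lowest cost
```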
Article
For almost a decade we have been working at developing and using template-based models for parallel computing. Template-based models separate the specification of the parallel structuring aspects from the application code that is to be parallelized. A user provides the application code and specifies the parallel structure of the application using high-level icons, called templates. The parallel programming system then generates the code necessary for parallelizing the application. The goal here is to provide a mechanism for quick and reliable development of coarse-grain parallel applications that employ frequently occurring parallel structures. Our initial template-based system, FrameWorks, was positively received but had a number of shortcomings. The Enterprise parallel programming environment evolved out of this work. Now, after several years of experience with the system, its shortcomings are becoming evident. Controlled experiments have been conducted to assess the usability of our...
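The separation this abstract describes, application code on one side and a named parallel structure on the other, can be suggested with a toy template registry. The decorator, registry, and "pipeline" template below are hypothetical illustrations, not Enterprise's actual interface.

```python
from concurrent.futures import ThreadPoolExecutor

# Application code: ordinary sequential functions, no parallelism inside.
def parse(x):
    return x.strip()

def compute(x):
    return len(x)

# Parallel structure: chosen by name from a registry of templates.
TEMPLATES = {}

def template(name):
    def register(builder):
        TEMPLATES[name] = builder
        return builder
    return register

@template("pipeline")
def pipeline(stages):
    # Generated behaviour: each stage is applied to all items in
    # parallel, one stage at a time.
    def run(items):
        with ThreadPoolExecutor(max_workers=4) as pool:
            for stage in stages:
                items = list(pool.map(stage, items))
        return items
    return run

program = TEMPLATES["pipeline"]([parse, compute])
print(program(["  alpha  ", "beta "]))   # [5, 4]
```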
Conference Paper
Full-text available
PASM is a concept for a parallel processing system that allows experimentation with different architectural design alternatives. PASM is dynamically reconfigurable along three dimensions: partitionability into independent or communicating submachines, variable interprocessor connections, and mixed-mode SIMD/MIMD parallelism. With mixed-mode parallelism, a program can switch between SIMD (synchronous) and MIMD (asynchronous) parallelism at instruction-level granularity, allowing the use of both modes in a single machine. The PASM concept is presented, showing the ways in which reconfiguration can be accomplished. Trade-offs among SIMD, MIMD, and mixed-mode parallelism are explored. The small-scale PASM prototype with 16 processing elements is described. The ELP mixed-mode programming language used on the prototype is discussed. An example of a prototype-based study that demonstrates the potential of mixed-mode parallelism is given.
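Mixed-mode operation can be caricatured with ordinary threads: a barrier emulates SIMD-style lockstep for a synchronous phase, after which each "PE" runs a data-dependent MIMD-style phase independently. Real PASM switches modes at instruction-level granularity in hardware; the sketch below only illustrates the idea, with invented data.

```python
from threading import Barrier, Thread

N = 4
barrier = Barrier(N)
data = list(range(N))              # one value per "PE" (invented data)

def pe(pe_id):
    # SIMD-like phase: every PE performs the same steps in lockstep.
    data[pe_id] *= 2
    barrier.wait()                 # all writes done before anyone reads
    s = sum(data)
    barrier.wait()                 # all reads done before anyone writes
    data[pe_id] += s // N
    # MIMD-like phase: each PE now proceeds independently.
    while data[pe_id] % 3:
        data[pe_id] += 1

threads = [Thread(target=pe, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(data)                        # [3, 6, 9, 9]
```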
Article
Features of an explicitly parallel programming language targeted for reconfigurable parallel processing systems, where the machine's N processing elements (PEs) are capable of operating in both the SIMD and SPMD modes of parallelism, are described. The SPMD (single program-multiple data) mode of parallelism is a subset of the MIMD mode where all processors execute the same program. By providing all aspects of the language with an SIMD mode version and an SPMD mode version that are syntactically and semantically equivalent, the language facilitates experimentation with and exploitation of hybrid SIMD/SPMD machines. Language constructs (and their implementations) for data management, data-dependent control-flow, and PE-address-dependent control-flow are presented. These constructs are based on experience gained from programming a parallel machine prototype and are being incorporated into a compiler under development. Much of the research presented is applicable to general SIMD machines and MIMD machines.
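The abstract's central idea, SIMD and SPMD versions of a construct with identical semantics, can be illustrated for data-dependent control flow: an SIMD realization steps all PEs through both branches under an enable mask, while an SPMD realization lets each PE branch independently. The sketch below is an assumption-laden illustration of that equivalence, not ELP's actual syntax.

```python
# SIMD realization: all PEs traverse both branches under an enable mask.
def cond_simd(values):
    mask = [v % 2 == 0 for v in values]                   # "where even"
    values = [v // 2 if m else v                          # then-branch
              for v, m in zip(values, mask)]
    values = [v if m else 3 * v + 1                       # else-branch
              for v, m in zip(values, mask)]
    return values

# SPMD realization: each PE branches independently on its own datum.
def cond_spmd(values):
    return [v // 2 if v % 2 == 0 else 3 * v + 1 for v in values]

pes = [5, 8, 12, 7]                    # one value per PE (invented)
assert cond_simd(pes) == cond_spmd(pes) == [16, 4, 6, 22]
print(cond_simd(pes))
```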
Article
The authors describe CODE (computation-oriented display environment), which can be used to develop modular parallel programs graphically in an environment built around fill-in templates. It also lets programs written in any sequential language be incorporated into parallel programs targeted for any parallel architecture. Broad expressive power was obtained in CODE by including abstractions of all the dependency types that occur in the widely used parallel-computation models and by keeping the form used to specify firing rules general. The CODE programming language is a version of generalized dependency graphs designed to encode the unified parallel-computation model. A simple example is used to illustrate the abstraction level in specifying dependencies and how they are separated from the computation-unit specification. The most important CODE concepts are described by developing a declarative, hierarchical program with complex firing rules and multiple dependency types.
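A dependency-graph program in the spirit CODE describes can be sketched in a few lines: computation units are plain functions, dependencies are declared separately as graph edges, and a node fires when its firing rule is satisfied (here, simply "all inputs available"). The node names and the executor are illustrative, not CODE's actual language.

```python
# Each node: (names of its inputs, the computation unit to run).
graph = {
    "a": ([], lambda: 3),
    "b": ([], lambda: 4),
    "sum": (["a", "b"], lambda a, b: a + b),
    "out": (["sum"], lambda s: print("result:", s)),
}

def run(graph):
    done, pending = {}, dict(graph)
    while pending:
        for name, (deps, unit) in list(pending.items()):
            if all(d in done for d in deps):       # firing rule holds
                done[name] = unit(*[done[d] for d in deps])
                del pending[name]
    return done

run(graph)   # prints: result: 7
```

Note how the dependency structure (the edge lists) is specified entirely apart from the computation units themselves, which is the separation the abstract emphasizes.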
Article
We have developed a programming model that integrates concurrency with object-based programming. The model includes features for object definition and instantiation, and it supports concurrent executions of designated methods of the object instances. Yet, the model includes no specific communication or synchronization mechanism, except procedure call. The traditional schemes for communication, synchronization, interfaces among processes and accesses to shared memory can be encoded by objects in our model. Concurrency in the model is transparent to the programmer; the programmer believes that the program executes in a sequential manner whereas the implementation employs concurrent threads to gain efficiency.
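The transparency idea can be suggested with a decorator that runs designated methods on a thread pool while call sites still read as ordinary sequential procedure calls. The decorator and class below are hypothetical; the paper's model does not prescribe this mechanism.

```python
from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor()

def concurrent_method(fn):
    # Designated methods return a handle instead of blocking the caller.
    def wrapper(*args):
        return _pool.submit(fn, *args)
    return wrapper

class Matrix:
    def __init__(self, rows):
        self.rows = rows
    @concurrent_method
    def row_sums(self):
        return [sum(r) for r in self.rows]

m = Matrix([[1, 2], [3, 4]])
handle = m.row_sums()       # runs on a worker thread, caller continues
print(handle.result())      # [3, 7] -- reads like a sequential call
```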