Related Work

We provide a very brief overview of related work, focusing on application representation issues for comprehensive parallel system modeling environments. Additional descriptions of work related to POEMS are available elsewhere [1,4,6,8,21].

Many previous simulation-based environments have been used to study parallel program performance and to model parallel systems (e.g., WWT [18], Maisie [7], SimOS [20], and RSIM [15]). These environments are based on program-driven simulation, in which the application representation is simply the program itself. They therefore provide no abstractions suitable for driving analytical models or more abstract simulation models. Such models will be crucial for making the study of large-scale applications and systems feasible.

Some compiler-driven tools for performance prediction, namely FAST [11] and Parashar's interpretive framework [17], have used more abstract graph-based representations of parallel programs similar to our static task graph. Parashar's environment also uses a functional interpretation technique for performance prediction that is similar to our compile-time instantiation of the dynamic task graph for POEMS using the dHPF compiler. Parashar's framework, however, is limited to the restricted parallel codes generated by their Fortran90D/HPF compiler, namely codes that use a loosely synchronous communication model (i.e., alternating phases of computation and global communication) and that partition computation using the owner-computes rule heuristic [19]. In addition, each of these environments focuses on a single performance prediction technique (simulation of message passing in FAST; symbolic interpretation of analytical formulas in Parashar's framework), whereas our representation is designed to drive a wide range of modeling techniques.
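To make this restriction concrete, the sketch below (our illustration, not code from either system) shows the loosely synchronous pattern these frameworks assume: under a block distribution, each process updates only the array elements it owns (the owner-computes rule), and all processes then take part in a global communication phase before the next computation phase begins. The problem size N, the number of iterations, and the update formula are arbitrary placeholders.

    #include <mpi.h>
    #include <stdlib.h>

    #define N 1024  /* global problem size (illustrative only) */

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int chunk = N / nprocs;   /* block distribution; assume nprocs divides N */
        int lo = rank * chunk;    /* first index this process owns */
        double *global = calloc(N, sizeof(double));
        double *local  = calloc(chunk, sizeof(double));

        for (int t = 0; t < 10; t++) {
            /* Computation phase: owner-computes -- each process
               updates only the elements assigned to it. */
            for (int i = 0; i < chunk; i++)
                local[i] = 0.5 * (global[lo + i] + global[(lo + i + 1) % N]);

            /* Global communication phase: every process contributes its
               block and receives the full updated array. */
            MPI_Allgather(local, chunk, MPI_DOUBLE,
                          global, chunk, MPI_DOUBLE, MPI_COMM_WORLD);
        }

        free(local);
        free(global);
        MPI_Finalize();
        return 0;
    }

The strict alternation of the two phases is what makes such codes easy to model analytically, and it is exactly the structure that our more general task graph representation is not restricted to.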

The PACE performance toolset [16] includes a language and runtime environment for parallel program performance prediction and analysis. The language requires users to describe the parallel subtasks and their computation and communication patterns manually, and it can provide different levels of model abstraction. This system, too, is restricted to a loosely synchronous communication model.

Finally, the PlusPyr project [10] has proposed a parameterized task graph as a compact, problem-size-independent representation of some frequently used directed acyclic task graphs. Their representation has important similarities with ours, most notably the use of symbolic integer sets to describe task instances and the use of symbolic execution time estimates. However, their representation is mainly intended for program parallelization and scheduling (PlusPyr serves as a front-end for the Pyrros task scheduling tool [22]). Their task graph is therefore designed first to extract fine-grain parallelism from sequential programs using dependence analysis, and then to derive communication and synchronization rules from these dependences. In contrast, our representation is designed to capture the structure of arbitrary message-passing parallel programs, independent of how the parallelization was performed, and is geared towards supporting detailed performance modeling. A second major difference is that they assume a simple parallel execution model in which a task receives all of its inputs from other tasks in parallel and sends all of its outputs to other tasks in parallel, whereas we capture much more general communication behavior in order to describe realistic message-passing programs.
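As a small illustration of the idea (our notation, not PlusPyr's), a one-dimensional pipeline of N task instances can be captured by a single symbolic integer set, one dependence rule, and a symbolic cost estimate:

    \[
    \mathcal{T} = \{\, T(i) \mid 1 \le i \le N \,\}, \qquad
    T(i) \rightarrow T(i+1) \ \text{for } 1 \le i < N, \qquad
    \mathrm{cost}(T(i)) = c_1 + c_2\, i,
    \]

where the problem size N and the coefficients c_1, c_2 remain symbolic; fixing N instantiates the concrete directed acyclic task graph without having to enumerate it in advance.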

