The Scalable Modeling System (SMS) is a directive-based parallelization tool. The user inserts directives, in the form of comments, into existing Fortran code; SMS translates the code and directives into a parallel version that runs on shared- and distributed-memory high-performance computing platforms. Directives are available to support array re-sizing, inter-process communication, loop translation, and parallel output. SMS also provides debugging tools that significantly reduce code parallelization time. SMS is intended for applications that use regular structured grids solved with explicit finite difference approximation (FDA) or spectral methods. It has been used to parallelize ten atmospheric and oceanic models, but the tool is sufficiently general to be applied to other structured-grid codes. The performance of the SMS parallel versions of the Eta atmospheric model and the Regional Ocean Modeling System (ROMS) oceanic model is analyzed. The analysis demonstrates that SMS adds insignificant overhead compared to hand-coded Message Passing Interface (MPI) solutions in these cases. This research also shows that, for the ROMS model, a distributed-memory parallel approach on a cache-based shared-memory machine yields better performance than an equivalent shared-memory solution, because the shared-memory version suffers from false sharing. We further find that the ability of compilers and machines to handle dynamically allocated arrays efficiently is highly variable. Finally, SMS demonstrates the performance benefit gained by allowing the user to place communications explicitly, and we call for extensions to the High Performance Fortran (HPF) standard to support this capability.
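To illustrate the directive style described above, the sketch below shows how a typical explicit finite-difference loop might be annotated for SMS. This is a minimal sketch, not an excerpt from the paper: the directive spellings (!SMS$DISTRIBUTE, !SMS$EXCHANGE, !SMS$PARALLEL) and the decomposition name dh are assumptions that should be checked against the SMS User's Guide.

! Minimal SMS annotation sketch (assumed directive spellings; the
! decomposition dh is assumed to be declared elsewhere with a
! DECLARE_DECOMP/CREATE_DECOMP directive).
subroutine diffuse(u, unew, im, jm)
  implicit none
  integer, intent(in) :: im, jm
! Mark u and unew as arrays decomposed over dh.
!SMS$DISTRIBUTE(dh, im, jm) BEGIN
  real, intent(in)    :: u(im, jm)
  real, intent(inout) :: unew(im, jm)
!SMS$DISTRIBUTE END
  integer :: i, j

! Update halo points before the stencil reads neighboring values.
!SMS$EXCHANGE(u)

! Translate the global loop bounds into each process's local bounds.
!SMS$PARALLEL(dh, <i>, <j>) BEGIN
  do j = 2, jm - 1
    do i = 2, im - 1
      ! Explicit five-point diffusion update, typical of FDA codes.
      unew(i, j) = u(i, j) + 0.25 * (u(i-1, j) + u(i+1, j) &
                 + u(i, j-1) + u(i, j+1) - 4.0 * u(i, j))
    end do
  end do
!SMS$PARALLEL END
end subroutine diffuse

Because each directive is an ordinary Fortran comment, the annotated file still compiles and runs serially with any Fortran compiler; the SMS translator consumes the directives to generate the message-passing parallel version.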