High Performance Runtime for PGAS Models (OpenSHMEM, UPC and CAF)

Overview

Partitioned Global Address Space (PGAS) languages are growing in popularity because they provide a shared-memory programming model on top of distributed-memory machines. In this model, data can be stored in global arrays and manipulated by individual compute threads. The model shows promise for expressing algorithms with irregular computation and communication patterns. Existing MPI applications are unlikely to be rewritten entirely in the emerging PGAS languages in the near future; it is more likely that parts of these applications will be converted to the newer models. This requires that the underlying system software be able to support multiple programming models simultaneously.
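
As a minimal illustration of the PGAS model described above (a sketch for orientation, not code from this project), the UPC fragment below declares a globally addressable shared array and has each thread update only the elements that have affinity to it:

#include <upc.h>
#include <stdio.h>

#define PER_THREAD 16

/* Shared (globally addressable) array, distributed cyclically across threads;
 * THREADS appears in the dimension so this also compiles in the
 * dynamic-threads environment. */
shared int data[PER_THREAD * THREADS];

int main(void) {
    int i;

    /* Each thread updates only the elements that have affinity to it */
    upc_forall (i = 0; i < PER_THREAD * THREADS; i++; &data[i]) {
        data[i] = MYTHREAD;
    }
    upc_barrier;

    if (MYTHREAD == 0)
        printf("data[1] has affinity to thread %d\n",
               (int)upc_threadof(&data[1]));

    return 0;
}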

Objectives

In this research work, we propose a Unified Runtime for supporting multiple programming models. Currently, our high-performance runtime supports the MPI and UPC (Unified Parallel C) programming models. It provides built-in support for load balancing and work stealing through a multi-endpoint design. We have also proposed features such as 'UPC Queues' for expressing irregular applications in UPC in a high-performance manner.
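
The 'UPC Queues' API itself is not reproduced here. As a rough sketch of the kind of irregular enqueue pattern such queue support targets, the standard-UPC fragment below implements a shared queue guarded by a single global lock; the lock names and capacity are illustrative assumptions, and the point is that every enqueue serializes on the lock, which is the baseline such proposals aim to improve:

#include <upc.h>
#include <stdio.h>

#define QCAP (64 * THREADS)

/* Shared queue storage and tail counter (tail has affinity to thread 0),
 * guarded by one global lock */
shared long queue[QCAP];
shared long tail;
upc_lock_t *qlock;               /* private pointer to a shared lock object */

int main(void) {
    long slot;

    qlock = upc_all_lock_alloc();    /* collective: same lock on every thread */

    /* Each thread enqueues one item; the lock serializes updates to 'tail' */
    upc_lock(qlock);
    slot = tail;
    tail = slot + 1;
    queue[slot % QCAP] = (long)MYTHREAD;
    upc_unlock(qlock);

    upc_barrier;
    if (MYTHREAD == 0) {
        printf("enqueued %ld item(s)\n", (long)tail);
        upc_lock_free(qlock);
    }
    return 0;
}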

Journals (1)

1 K. Hamidouche, A. Venkatesh, A. Awan, H. Subramoni, and D. K. Panda, CUDA-Aware OpenSHMEM: Extensions and Designs for High Performance OpenSHMEM on GPU Clusters, Parallel Computing (ParCo), Elsevier

Conferences & Workshops (23)


Ph.D. Dissertations (2)

1 J. Jose, Designing High Performance and Scalable Unified Communication Runtime (UCR) for HPC and Big Data Middleware, Aug 2014
2 S. Potluri, Enabling Efficient Use of MPI and PGAS Programming Models on Heterogeneous Clusters with High Performance Interconnects, May 2014