Date Available

12-6-2017

Year of Publication

2017

Degree Name

Master of Science in Electrical Engineering (MSEE)

Document Type

Master's Thesis

College

Engineering

Department/School/Program

Electrical and Computer Engineering

First Advisor

Dr. Henry Dietz

Abstract

Most high-performance computing (HPC) systems use hierarchical designs that allow programmers to exploit multiple levels of parallelism. Building a cluster from multiple multi-core/multi-processor computers supports both fine-grain and large-grain parallel computation. Aggregate function communications provide an easy-to-use and efficient set of mechanisms for communicating and coordinating between processing elements, but the model originally targeted only fine-grain parallel hardware. This work shows that a hierarchical implementation of aggregate functions is a viable alternative to MPI (the standard Message Passing Interface library) for programming clusters that provide both fine-grain and large-grain execution. The performance of a prototype implementation is evaluated and compared to that of MPI.
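
As an illustrative sketch only (not code from the thesis), the short MPI program below shows the kind of collective, aggregate-style operation against which the prototype is compared: every process contributes a value, and MPI_Allreduce returns the combined result to all processes. The variable names and the choice of MPI_Allreduce as the example operation are assumptions made here for illustration.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each process contributes its rank; MPI_Allreduce combines the
           contributions and delivers the sum to every process, an example
           of the aggregate-style coordination discussed in the abstract. */
        int local = rank, sum = 0;
        MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Sum of ranks across %d processes: %d\n", size, sum);

        MPI_Finalize();
        return 0;
    }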

Digital Object Identifier (DOI)

https://doi.org/10.13023/ETD.2017.496
