Date Available

12-7-2011

Year of Publication

2009

Degree Name

Master of Science in Electrical Engineering (MSEE)

Document Type

Thesis

College

Engineering

Department

Electrical Engineering

First Advisor

Dr. Henry (Hank) Dietz

Abstract

GPUs (Graphics Processing Units) employ a multi-threaded execution model using multiple SIMD cores. Compared to using a single SIMD engine, this architecture can scale to more processing elements. However, GPUs sacrifice the timing properties that made barrier synchronization implicit and collective communication operations fast.
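For instance, CUDA's __syncthreads() barrier spans only a single thread block; there is no implicit barrier across blocks, so grid-wide synchronization must be synthesized in software. A minimal sketch of this limitation (the kernel name and 256-thread block size are illustrative assumptions, not from the thesis):

    // __syncthreads() is a barrier only within one thread block;
    // CUDA provides no implicit barrier across blocks.
    // Assumes blockDim.x <= 256.
    __global__ void blockLocalBarrier(float *data)
    {
        __shared__ float buf[256];
        int t = threadIdx.x;

        buf[t] = data[blockIdx.x * blockDim.x + t];
        __syncthreads();   // all threads in THIS block reach here...

        // ...but threads in other blocks remain unsynchronized; a
        // grid-wide barrier must be built explicitly in software.
        data[blockIdx.x * blockDim.x + t] = buf[(t + 1) % blockDim.x];
    }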

This thesis demonstrates efficient methods by which these aggregate functions can be implemented on unmodified NVIDIA CUDA GPUs. Although NVIDIA's highest “compute capability” GPUs provide atomic memory functions, those functions have O(N) execution time. In contrast, the methods proposed here take advantage of basic properties of the GPU architecture to yield implementations that are both efficient and portable to all CUDA-capable GPUs. A variety of coordination operations are synthesized, and the algorithm, CUDA code, and performance of each are discussed in detail.
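To illustrate the contrast the abstract draws (a hedged sketch, not code from the thesis; kernel names and the 256-thread block size are assumptions): a global sum built on atomicAdd serializes N updates to a single location, while a shared-memory tree reduction completes in log2(blockDim.x) steps per block.

    // O(N) worst case: every thread contends for one memory word.
    // *out must be zero-initialized; integer atomicAdd on global
    // memory requires compute capability 1.1 or higher.
    __global__ void atomicSum(const int *in, int *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) atomicAdd(out, in[i]);
    }

    // O(log blockDim.x) steps per block: pairwise tree reduction in
    // shared memory; each block emits one partial sum. Assumes
    // blockDim.x is a power of two.
    __global__ void treeSum(const int *in, int *partial, int n)
    {
        extern __shared__ int s[];
        int t = threadIdx.x;
        int i = blockIdx.x * blockDim.x + t;

        s[t] = (i < n) ? in[i] : 0;
        __syncthreads();

        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (t < stride) s[t] += s[t + stride];
            __syncthreads();
        }
        if (t == 0) partial[blockIdx.x] = s[0];
    }

A host would launch treeSum with one int of shared memory per thread, e.g. treeSum<<<blocks, 256, 256 * sizeof(int)>>>(d_in, d_partial, n), and then reduce the per-block partial sums.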
