Date Available
12-7-2011
Year of Publication
2009
Document Type
Thesis
Degree Name
Master of Science in Electrical Engineering (MSEE)
College
Engineering
Department/School/Program
Electrical Engineering
Faculty
Dr. Henry (Hank) Dietz
Faculty
Dr. Stephen D. Gedney
Abstract
GPUs (Graphics Processing Units) employ a multi-threaded execution model built from multiple SIMD cores. Compared to a single SIMD engine, this architecture scales to more processing elements. However, GPUs sacrifice the timing properties that made barrier synchronization implicit and collective communication operations fast.
This thesis demonstrates efficient methods by which these aggregate functions can be implemented using unmodified NVIDIA CUDA GPUs. Although NVIDIA's highest "compute capability" GPUs provide atomic memory functions, those functions have O(N) execution time. In contrast, the methods proposed here take advantage of basic properties of the GPU architecture to yield implementations that are both efficient and portable to all CUDA-capable GPUs. A variety of coordination operations are synthesized, and the algorithm, CUDA code, and performance of each are discussed in detail.
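As an illustrative sketch (not code from the thesis itself), the O(N) atomic-based approach the abstract contrasts against can be written as a naive inter-block barrier: each block increments a global counter with an atomic operation, so the N updates serialize. All names here are hypothetical, and the sketch assumes every block is co-resident on the GPU, since a block that has not been scheduled can never arrive at the barrier.

```
// Hedged sketch of a naive atomic-counter global barrier in CUDA.
// atomicAdd serializes the per-block arrivals, giving the order-N
// execution time the abstract mentions. Correct only when all
// numBlocks thread blocks are simultaneously resident on the device.
__device__ volatile int g_count = 0;   // blocks that have arrived
__device__ volatile int g_sense = 0;   // sense-reversal flag

__device__ void naive_global_barrier(int numBlocks)
{
    __syncthreads();                    // barrier within the block first
    if (threadIdx.x == 0) {
        int sense = g_sense;
        // One serialized atomic update per block: O(numBlocks) total.
        if (atomicAdd((int *)&g_count, 1) == numBlocks - 1) {
            g_count = 0;                // last arriver resets the counter
            g_sense = 1 - sense;        // and releases the waiters
        } else {
            while (g_sense == sense) ;  // spin until released
        }
    }
    __syncthreads();                    // hold the whole block until done
}
```

The thesis's own methods avoid this serialization by exploiting structural properties of the GPU rather than per-block atomic updates.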
Recommended Citation
Rivera-Polanco, Diego Alejandro, "COLLECTIVE COMMUNICATION AND BARRIER SYNCHRONIZATION ON NVIDIA CUDA GPU" (2009). University of Kentucky Master's Theses. 635.
https://uknowledge.uky.edu/gradschool_theses/635
