CCES Unicamp

Data-flow analysis and optimization for data coherence in heterogeneous architectures

Although heterogeneous computing has enabled developers to achieve impressive program speed-ups, the cost of moving data between host and device, and of keeping it coherent, can easily eliminate any performance gains achieved by acceleration. To deal with this problem, this paper introduces DCA: a pair of data-flow analyses that determine how variables are used by the host and the device at each program point. It also introduces DCO, a code optimization technique that uses DCA information to: (a) allocate OpenCL shared buffers between host and devices; and (b) insert appropriate OpenCL function calls at program points so as to minimize the number of data coherence operations. We used the AClang compiler to measure the impact of DCA and DCO when generating code from the Parboil, Polybench, and Rodinia benchmarks for a set of discrete/integrated GPUs. The experimental results showed speed-ups of up to 5.25x (average of 1.39x) on an ARM Mali-T880 and up to 8.87x (average of 1.66x) on an NVIDIA Pascal Titan X GPU.
Sousa, R., Pereira, M., Pereira, F. M. Q., & Araujo, G. (2019). Data-flow analysis and optimization for data coherence in heterogeneous architectures. Journal of Parallel and Distributed Computing, 130, 126-139.
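To give a flavor of what an analysis like DCA computes, the toy sketch below (illustrative only, not AClang's actual implementation) tracks, for each variable, which side holds the freshest copy, and flags the program points where a coherence operation, such as an OpenCL buffer map/unmap, would have to be inserted:

```python
def coherence_points(stmts):
    """Forward pass over a straight-line program.

    stmts: list of (side, op, var) with side in {'host', 'device'}
    and op in {'read', 'write'}. Returns the indices of statements
    that need a data coherence operation before they execute.
    """
    fresh = {}       # var -> side holding the up-to-date copy, or 'both'
    transfers = []
    for i, (side, op, var) in enumerate(stmts):
        if op == 'read' and fresh.get(var, side) not in (side, 'both'):
            transfers.append(i)     # stale copy on the reading side
            fresh[var] = 'both'     # after the map, both copies agree
        elif op == 'write':
            fresh[var] = side       # a write invalidates the other copy
    return transfers

prog = [
    ('host',   'write', 'a'),   # 0: host initializes a
    ('device', 'read',  'a'),   # 1: kernel reads a -> transfer needed
    ('device', 'write', 'b'),   # 2: kernel produces b
    ('device', 'read',  'b'),   # 3: device reads its own result -> OK
    ('host',   'read',  'b'),   # 4: host consumes b -> transfer needed
]
print(coherence_points(prog))   # -> [1, 4]
```

A naive runtime would transfer every buffer before every kernel launch; the point of the analysis is that only statements 1 and 4 above actually require a coherence operation.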
