DPS921/OpenACC vs OpenMP Comparison

When OpenMP and OpenACC work together, it is usually on a single node: one CPU whose OpenMP threads drive several accelerators, since OpenMP itself is limited to shared memory. When there are multiple CPUs, each with access to its own accelerators, OpenMP alone is not enough, and we can introduce MPI.
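Below is a minimal sketch of that single-node pattern (the array name, size, and compiler flags are illustrative, not taken from the lecture): each OpenMP host thread binds itself to one GPU through the OpenACC runtime API (acc_set_device_num) and launches an OpenACC region on its own slice of the data.

<source lang="c">
#include <omp.h>
#include <openacc.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float x[N];
    int ngpus = acc_get_num_devices(acc_device_nvidia);
    if (ngpus == 0) return 1;          /* no accelerator found */

    /* one host thread per GPU; each thread drives its own device */
    #pragma omp parallel num_threads(ngpus)
    {
        int gpu   = omp_get_thread_num();
        int chunk = N / ngpus;
        int start = gpu * chunk;
        int end   = (gpu == ngpus - 1) ? N : start + chunk;

        acc_set_device_num(gpu, acc_device_nvidia);

        /* each GPU fills its own slice of the shared host array */
        #pragma acc parallel loop copyout(x[start:end-start])
        for (int i = start; i < end; i++)
            x[i] = 2.0f * i;
    }

    printf("x[N-1] = %f\n", x[N - 1]);
    return 0;
}
</source>

With the NVIDIA HPC SDK, something like nvc -mp -acc would build this; the exact flags depend on the compiler.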
As we have learned, MPI is used to allow communication and data transfer between processes during parallel execution. In the case of multiple accelerators, one way to use the two together is to give each MPI process its own accelerator and use MPI to communicate between the different accelerators.
 
The following is a screenshot taken from Nvidia's Advanced OpenACC lecture, showing how MPI works with OpenACC.
[[File: Mpiopenacc.png|800px]]
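A minimal sketch of the MPI + OpenACC pattern shown above, assuming one MPI rank per GPU (the reduction itself is just an illustrative workload): each rank selects a device based on its rank number, computes a partial result with OpenACC, and MPI combines the per-GPU results.

<source lang="c">
#include <mpi.h>
#include <openacc.h>
#include <stdio.h>

#define N 1000000   /* elements per rank */

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* bind this rank to one of the GPUs visible on its node */
    int ngpus = acc_get_num_devices(acc_device_nvidia);
    if (ngpus > 0)
        acc_set_device_num(rank % ngpus, acc_device_nvidia);

    /* each rank computes a partial sum on its own GPU */
    double local = 0.0;
    #pragma acc parallel loop reduction(+:local)
    for (int i = 0; i < N; i++)
        local += (double)(rank * N + i);

    /* MPI combines the per-GPU results on the host */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %f\n", size, total);

    MPI_Finalize();
    return 0;
}
</source>

With a CUDA-aware MPI implementation, device buffers can be passed to MPI calls directly (for example from an OpenACC host_data use_device region), avoiding the copy back to the host; the first reference below benchmarks that approach.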
 
= References =
https://developer.nvidia.com/blog/benchmarking-cuda-aware-mpi/
 
https://developer.nvidia.com/hpc-sdk
 
https://gcc.gnu.org/wiki/OpenACC
 
https://on-demand.gputechconf.com/gtc/2015/webinar/openacc-course/advanced-openacc-techniques.pdf
 
https://on-demand.gputechconf.com/gtc/2016/presentation/s6510-jeff-larkin-targeting-gpus-openmp.pdf
 
https://on-demand.gputechconf.com/gtc/2016/webinar/openacc-course/Advanced-OpenACC-Course-Lecture2--Multi-GPU-20160602.pdf