= Team AGC =
== Team Members ==
<s>
# [mailto:acooc@myseneca.ca?subject=gpu610 Andy Cooc], Some responsibility
# [mailto:gcastrolondono@myseneca.ca?subject=gpu610 Gabriel Castro], Some other responsibility</s>
# [mailto:cmarkieta@myseneca.ca?subject=gpu610 Christopher Markieta], All responsibility
[mailto:acooc@myseneca.ca,gcastrolondono@myseneca.ca,cmarkieta@myseneca.ca?subject=gpu610 Email All]
====== System Requirements ======
This project will be built and tested on a <s>Windows 7 64-bit</s> Fedora 20 operating system (following this [http://www.r-tutor.com/gpu-computing/cuda-installation/cuda6.5-fc20 CUDA 6.5 on Fedora 20 tutorial]; due to complications with the display driver, remember to [http://www.if-not-true-then-false.com/2011/fedora-16-nvidia-drivers-and-cuda-install-guide-disable-nouveau-driver/#troubleshooting blacklist the nouveau driver in your grub config]) with an Intel Core i5-4670K Haswell CPU (overclocked to 4.9 GHz) and an Nvidia GTX 480 GPU (overclocked to 830/924/1660 MHz), manufactured by Zotac, with 1.5 GB of VRAM.
mpi_wave will require the OpenMPI library to compile.
Here is the profiling of the original CPU application, with an increased maximum step count to improve the test comparison and to calculate the curve with more precision at the given step value.
Since npoints is 800 in total (divided among separate CPU threads in the MPI version), one GPU thread per point will never reach the maximum number of threads per block, 1,024.
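
As a rough illustration of that launch configuration, here is a minimal, self-contained CUDA sketch. This is my own hypothetical code, not the project's: the kernel name step_kernel, the buffer names, and the update coefficient are illustrative. It performs one time step of the 1D wave equation with a single block of 800 threads:

<pre>
// Hypothetical sketch, not the project's actual code: one block of 800
// threads is enough because npoints (800) is below the 1,024-thread limit.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void step_kernel(const float *old_v, const float *cur_v,
                            float *new_v, int npoints)
{
    int i = threadIdx.x;                       // one thread per point
    if (i > 0 && i < npoints - 1)
        // illustrative finite-difference update for the 1D wave equation
        new_v[i] = 2.0f * cur_v[i] - old_v[i]
                 + 0.09f * (cur_v[i - 1] - 2.0f * cur_v[i] + cur_v[i + 1]);
    else
        new_v[i] = 0.0f;                       // fixed endpoints
}

int main()
{
    const int npoints = 800;                   // total points on the string
    float *d_old, *d_cur, *d_new;
    cudaMalloc(&d_old, npoints * sizeof(float));
    cudaMalloc(&d_cur, npoints * sizeof(float));
    cudaMalloc(&d_new, npoints * sizeof(float));
    cudaMemset(d_old, 0, npoints * sizeof(float));
    cudaMemset(d_cur, 0, npoints * sizeof(float));

    // 800 threads in a single block, well under the 1,024 per-block maximum
    step_kernel<<<1, npoints>>>(d_old, d_cur, d_new, npoints);
    cudaDeviceSynchronize();
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(d_old);
    cudaFree(d_cur);
    cudaFree(d_new);
    return 0;
}
</pre>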
 
====== Sample Output ======
 
Steps: 1
 
[[Image:wave_output1.jpg]]
 
Steps: 500
 
[[Image:wave_output2.jpg]]
 
Steps: 1,000
 
[[Image:wave_output3.jpg]]
 
Steps: 10,000
 
[[Image:wave_output4.jpg]]
At this point, I am noticing the delay caused by constantly transferring data between RAM and video RAM. Splitting the array into multiple sections requires constant checking of the left and right boundary values of those arrays. Thus, I will refactor the entire code to use only one CPU thread and remove MPI.
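
As a sketch of where that refactor is heading (again my own illustrative code, with a hypothetical helper name run_on_device, and assuming the step_kernel from the earlier sketch), the idea is to upload the arrays to the GPU once, advance every step on the device, and copy the result back only at the end, rather than shuttling data between RAM and video RAM on every step:

<pre>
// Hypothetical sketch: keep the wave arrays resident on the GPU for the
// whole run instead of copying between RAM and video RAM on every step.
#include <cuda_runtime.h>

__global__ void step_kernel(const float *old_v, const float *cur_v,
                            float *new_v, int npoints);   // from the sketch above

void run_on_device(float *h_values, float *d_old, float *d_cur, float *d_new,
                   int npoints, int nsteps)
{
    size_t bytes = npoints * sizeof(float);

    cudaMemcpy(d_cur, h_values, bytes, cudaMemcpyHostToDevice);  // upload once
    cudaMemcpy(d_old, h_values, bytes, cudaMemcpyHostToDevice);

    for (int s = 0; s < nsteps; ++s) {
        step_kernel<<<1, npoints>>>(d_old, d_cur, d_new, npoints);
        float *tmp = d_old;                    // rotate the three time levels
        d_old = d_cur;
        d_cur = d_new;
        d_new = tmp;
    }

    cudaMemcpy(h_values, d_cur, bytes, cudaMemcpyDeviceToHost);  // download once
}
</pre>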
====== Optimization ======
After using shared memory and prefetching some constant values used in the kernel's operations, my GPU no longer crashes on extreme runs involving millions of steps. It also outperforms my CPU running the MPI version of this application in 4 threads at 4.9 GHz each. Since my video card has 48 KB of shared memory and I am not using more than 20 KB for all of my arrays, I do not need to worry about coalescing my data, since shared memory is much faster. Due to operational limits, the kernel is killed short of completion by the operating system's watchdog. Thus I have limited the maximum step count to 1 million; otherwise the kernel would need to be rethought, or run in Tesla Compute Cluster (TCC) mode with a secondary GPU not being used for display, but I just don't have that kind of money right now.

====== Testing ======

I have written the following script for testing purposes against the MPI implementation in dual-core and quad-core modes, and the CUDA implementation using 1 block of 800 threads:

<pre>
#!/usr/bin/env bash
# 1D Wave Equation Benchmark
# output_master() must be commented out
# Author: Christopher Markieta

set -e # Exit on error

MYDIR=$(dirname $0)

if [ "$1" == "mpi" ]; then
    if [ -z $2 ]; then
        echo "Usage: $0 mpi [2-8]"
        exit 1
    fi

    # Number of threads to launch
    run="mpirun -n $2 $MYDIR/wave.o"
elif [ "$1" == "cuda" ]; then
    run="$MYDIR/wave.o"
else
    echo "Usage: $0 [cuda|mpi] ..."
    exit 1
fi

# 1 to 1 million steps
for steps in 1 10 100 1000 10000 100000 1000000
do
    time echo $steps | $run &> /dev/null
done
</pre>

The final results show that the optimization was a success:

[[Image:cuda_wave.jpg]]

Although this application might not profit from such a large number of steps, it could be useful for scientific computation. The kernel could be improved to support an arbitrarily large number of steps, but I lack the hardware, and for demonstration purposes this should be enough.
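
For reference, here is a minimal sketch of the kind of shared-memory kernel described in the Optimization section above. This is illustrative code under my own assumptions, not the project's actual kernel: it hard-codes 800 points, keeps the three time levels in shared memory (3 × 800 floats, roughly 9.4 KB of the 48 KB available), prefetches the constant update coefficient, and only touches global memory when loading the initial values and writing the final result. With the whole time loop inside the kernel, very large step counts are exactly what triggers the display watchdog mentioned above.

<pre>
// Hypothetical sketch of a shared-memory 1D wave kernel (not the project's
// actual code). One block of 800 threads iterates all steps on-chip.
#include <cuda_runtime.h>

#define NPOINTS 800                            // fits in one block (< 1,024)

__global__ void wave_shared_kernel(float *g_values, int nsteps)
{
    __shared__ float s_old[NPOINTS], s_cur[NPOINTS], s_new[NPOINTS];
    const float sqtau = 0.09f;                 // prefetched constant, (c*dt/dx)^2
    int i = threadIdx.x;

    s_cur[i] = g_values[i];                    // load from global memory once
    s_old[i] = g_values[i];
    __syncthreads();

    for (int s = 0; s < nsteps; ++s) {
        if (i > 0 && i < NPOINTS - 1)
            s_new[i] = 2.0f * s_cur[i] - s_old[i]
                     + sqtau * (s_cur[i - 1] - 2.0f * s_cur[i] + s_cur[i + 1]);
        else
            s_new[i] = 0.0f;                   // fixed endpoints
        __syncthreads();

        s_old[i] = s_cur[i];                   // rotate the three time levels
        s_cur[i] = s_new[i];
        __syncthreads();
    }

    g_values[i] = s_cur[i];                    // write the result back once
}
</pre>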
