DPS915/CodeCookers

Assignment 3
'''Problem Description'''
In the second assignment our team had two options to choose from: a prime number calculator or a Pi calculator. After some discussion, we chose the Pi calculation problem, which estimates Pi with a Monte Carlo method by sampling random points. We took Norbert's original CPU program and ported it to the GPU, which sped up processing and resulted in a faster overall run time. In this assignment we experimented further with our CUDA solution and, by applying two different optimizations, achieved an even faster overall computation time.
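To make the problem concrete, the following is a minimal serial sketch of a Monte Carlo Pi estimator (illustrative only, not Norbert's original CPU program): random points are sampled in the unit square, and the fraction that lands inside the quarter circle approximates Pi/4.

<pre>
// Illustrative serial version (not the original program): sample random points
// in the unit square and count how many fall inside the quarter circle.
#include <cstdlib>
#include <iostream>

double estimatePi(long long samples) {
    long long inside = 0;
    for (long long i = 0; i < samples; ++i) {
        double x = rand() / (double)RAND_MAX;
        double y = rand() / (double)RAND_MAX;
        if (x * x + y * y <= 1.0)
            ++inside;                          // point is inside the quarter circle
    }
    return 4.0 * inside / (double)samples;     // Pi is roughly 4 * inside / total
}

int main() {
    std::cout << estimatePi(60000000) << std::endl;   // e.g. 60 million samples
    return 0;
}
</pre>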
 
'''Optimization 1'''
 
One of the main problems with our previous solution was that its execution time was much longer than we expected. While analyzing the program, we realized that we were wasting precious computation time generating random numbers on the host and copying them over to the device. For this reason we decided to generate the random numbers directly on the device using cuRAND, which reduced the run time to roughly a quarter of what it was before.
 
The following code sample demonstrates this optimization:
[[File:A3p1.PNG]]
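Since the code sample above is an image, here is a minimal sketch of the idea, assuming the cuRAND device API; the kernel and variable names are our own illustration, not the code in the screenshot. Each thread seeds its own generator on the device and tests a random point locally, so no random numbers need to be generated on the host or copied across the bus.

<pre>
#include <cuda_runtime.h>
#include <curand_kernel.h>

// Illustrative only: each thread seeds a cuRAND generator and tests one random
// point, writing 1 if it lands inside the quarter circle and 0 otherwise.
// Seeding per thread is the simplest approach, not necessarily the fastest.
__global__ void samplePoints(int* hits, int n, unsigned long long seed) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= n) return;

    curandState state;
    curand_init(seed, idx, 0, &state);       // per-thread generator state

    float x = curand_uniform(&state);        // uniform in (0, 1]
    float y = curand_uniform(&state);
    hits[idx] = (x * x + y * y <= 1.0f) ? 1 : 0;
}

int main() {
    const int n = 60000000;                  // 60 million sample points
    const int threads = 1024;
    const int blocks = (n + threads - 1) / threads;

    int* d_hits;
    cudaMalloc((void**)&d_hits, n * sizeof(int));
    samplePoints<<<blocks, threads>>>(d_hits, n, 1234ULL);
    cudaDeviceSynchronize();

    // ... d_hits is then collapsed on the device (see Optimization 2) ...
    cudaFree(d_hits);
    return 0;
}
</pre>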
 
'''Optimization 2'''
 
The second optimization we made was to implement a reduction algorithm, which helped in two ways. First, it lessened the amount of data copied back from the GPU from a potential 60 million integers to just 65 thousand, roughly 923 times fewer items to copy. Second, and most importantly, it performed most of the additions on the device, leaving far fewer iterations to add up the remaining partial sums. We also structured the reduction to minimize thread divergence, which gave even better results: it halved the time required after our initial optimization, effectively reducing the total run time to an eighth of the original.
 
The following code sample demonstrates this optimization:
[[File:A3P2.PNG]]
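Again, because the sample above is an image, the following is an illustrative sketch of a block-level reduction in shared memory; the names and launch configuration are assumptions, not our exact code. Each block collapses its slice of the hit flags into a single partial sum, so the host copies back one integer per block instead of one per sample point, and the halving stride keeps the active threads contiguous so whole warps retire together rather than diverging.

<pre>
// Illustrative block-level reduction (blockDim.x must be a power of two).
__global__ void reduceHits(const int* hits, int* blockSums, int n) {
    extern __shared__ int cache[];
    int tid = threadIdx.x;
    int idx = blockIdx.x * blockDim.x + threadIdx.x;

    cache[tid] = (idx < n) ? hits[idx] : 0;   // stage one element per thread
    __syncthreads();

    // Halve the stride each pass; threads with tid < stride form a contiguous
    // group, so warps stay fully active or fully idle (no divergence).
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            cache[tid] += cache[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        blockSums[blockIdx.x] = cache[0];     // one partial sum per block
}

// Example launch (shared memory size passed as the third argument):
//   reduceHits<<<blocks, 1024, 1024 * sizeof(int)>>>(d_hits, d_blockSums, n);
</pre>

The host (or a second reduction pass) then adds up the remaining per-block sums and computes Pi as roughly 4 * hits / n.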
'''Program Execution'''
 
The following table and chart compare the original CPU run time, the run time of the initial GPU port, and the final optimized run time.
 
[[File:A3P3.PNG ]]
 
 
 
[[File:A3P4.PNG |1100px]]
'''Conclusion'''
While coding the algorithm for this problem we went through many iterations. First we wrote it to run serially on the CPU; those results, while stable, were quite slow to compute for larger numbers of samples. Next came the GPU port. This was the bulk of the work across the three assignments, as it required us to completely redesign how the program functions, and based on the results of the port alone we can say with confidence that it would not have been worthwhile to implement the CUDA code, since the improvement was marginal at best. The third iteration, however, in which we optimized the code, showed a dramatic improvement in performance. At eight times faster than the GPU port from the second assignment, the optimized version blew us away with the effect that optimization can have on a program; the actual calculations on the arrays alone ended up taking only 35 ns. All in all, from the results collected we can conclude that there is substantial evidence that any Monte Carlo style program involving millions of small, repetitive calculations would benefit greatly from parallelization on the GPU.
