=== Assignment 3 ===
 
==== Problem Overview ====
An N-body simulation is a simulation of a dynamical system of particles, usually under the influence of physical forces such as gravity. In cosmology, N-body simulations are used to study non-linear structure formation, such as the formation of galaxy filaments and galaxy halos from dark matter. Direct N-body simulations are used to study the dynamical evolution of star clusters.
 
For each sample, the program must visit every body and accumulate the force exerted on it by every other body before updating its position. The algorithm therefore has a time complexity of O(n^2): doubling the number of bodies roughly quadruples the work. This quadratic growth makes the simulation slow for large n, and because the force on each body can be computed independently of the others, it is a perfect candidate for parallelization.
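A minimal serial sketch of this all-pairs update is shown below; the Body layout, softening term, and time step are illustrative assumptions, not the project's actual code.

<source lang="cpp">
#include <cmath>

// Illustrative all-pairs gravity step: for every body, accumulate the force
// contributed by every other body, then integrate. O(n^2) work per sample.
struct Body { float x, y, z, vx, vy, vz, mass; };   // assumed layout

const float G = 6.674e-11f;       // gravitational constant
const float SOFTENING = 1e-9f;    // keeps close pairs from dividing by zero

void step(Body* bodies, int n, float dt) {
    for (int i = 0; i < n; ++i) {
        float ax = 0.0f, ay = 0.0f, az = 0.0f;
        for (int j = 0; j < n; ++j) {          // inner loop: every other body
            if (j == i) continue;
            float dx = bodies[j].x - bodies[i].x;
            float dy = bodies[j].y - bodies[i].y;
            float dz = bodies[j].z - bodies[i].z;
            float distSqr = dx * dx + dy * dy + dz * dz + SOFTENING;
            float invDist = 1.0f / std::sqrt(distSqr);
            float f = G * bodies[j].mass * invDist * invDist * invDist;
            ax += dx * f;  ay += dy * f;  az += dz * f;
        }
        bodies[i].vx += ax * dt;
        bodies[i].vy += ay * dt;
        bodies[i].vz += az * dt;
    }
    for (int i = 0; i < n; ++i) {              // positions move after all forces
        bodies[i].x += bodies[i].vx * dt;
        bodies[i].y += bodies[i].vy * dt;
        bodies[i].z += bodies[i].vz * dt;
    }
}
</source>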
 
==== Baseline ====
 
==== Initial Profiling ====
 
Initially, the serial version of this program took about 13 minutes to calculate 512 samples of a 5,000-body simulation. Even with the use of Streaming SIMD Extensions (SSE), the program took about 7 minutes to complete the same test.
 
==== Basic Parallelization ====
 
* Converted the serial code at the program's bottleneck into two separate kernels (see the sketch below)
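The split could look roughly like the sketch below: one kernel accumulates accelerations and updates velocities, and a second kernel integrates positions. The kernel names and the Body struct are assumptions for illustration, not the project's actual code.

<source lang="cpp">
// One thread per body. Splitting the update into two kernels guarantees that
// every velocity is updated before any position moves, because the second
// launch cannot start until the first has finished.
struct Body { float x, y, z, vx, vy, vz, mass; };   // assumed layout

__global__ void computeForces(Body* bodies, int n, float G, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float ax = 0.0f, ay = 0.0f, az = 0.0f;
    for (int j = 0; j < n; ++j) {
        float dx = bodies[j].x - bodies[i].x;   // zero when j == i,
        float dy = bodies[j].y - bodies[i].y;   // so the self term adds nothing
        float dz = bodies[j].z - bodies[i].z;
        float distSqr = dx * dx + dy * dy + dz * dz + 1e-9f;  // softened
        float invDist = rsqrtf(distSqr);
        float f = G * bodies[j].mass * invDist * invDist * invDist;
        ax += dx * f;  ay += dy * f;  az += dz * f;
    }
    bodies[i].vx += ax * dt;
    bodies[i].vy += ay * dt;
    bodies[i].vz += az * dt;
}

__global__ void integrate(Body* bodies, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    bodies[i].x += bodies[i].vx * dt;
    bodies[i].y += bodies[i].vy * dt;
    bodies[i].z += bodies[i].vz * dt;
}
</source>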
 
==== Optimized Parallelization ====
 
* Changed the launch configuration of the kernels so that no threads are wasted (based on the device's compute capability)
* Prefetched values that do not change throughout the loops
* Performed computations directly in the kernel to reduce function-call overhead
* Stored the gravitational constant in constant memory (see the sketch after this list)
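Continuing the earlier kernel sketch, a host-side launcher covering the launch-configuration and constant-memory points might look roughly like this; the constant-memory symbol, the block-size choice, and the helper names are assumptions.

<source lang="cpp">
struct Body;   // as defined in the earlier sketch

__constant__ float d_G;   // gravitational constant kept in constant memory

// Same kernels as before, but reading d_G instead of taking G as an argument.
__global__ void computeForces(Body* bodies, int n, float dt);
__global__ void integrate(Body* bodies, int n, float dt);

void launchStep(Body* d_bodies, int n, float dt) {
    float G = 6.674e-11f;
    cudaMemcpyToSymbol(d_G, &G, sizeof(float));   // in practice done once

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // Use the largest block the device supports (1024 threads on compute
    // capability 2.x, 512 on 1.x) and round the grid up so every body gets a
    // thread; the i >= n guard in the kernels discards the few extras.
    int threads = (prop.major >= 2) ? 1024 : 512;
    if (threads > prop.maxThreadsPerBlock) threads = prop.maxThreadsPerBlock;
    int blocks = (n + threads - 1) / threads;

    computeForces<<<blocks, threads>>>(d_bodies, n, dt);
    integrate<<<blocks, threads>>>(d_bodies, n, dt);
}
</source>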
 
==== Profiles ====
 
===== Profile #1 =====
[[Image:]]
 
 
[[Image:]]
 
To make the difference between the pre- and post-optimized code visible, this graph does not include the serial CPU timings.
 
===== Profile #2 =====
Our second profile again consists of running simulations for 240 seconds to determine how many samples we achieve per second, and how many total samples we end up with after four minutes.
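A fixed-duration measurement like this might be structured roughly as follows; runSample() is a hypothetical stand-in for one full simulation step, not a function from the project.

<source lang="cpp">
#include <chrono>
#include <cstdio>

void runSample();   // hypothetical: advances the simulation by one sample

// Run the simulation for a fixed wall-clock budget (e.g. 240 seconds) and
// report how many samples completed and the resulting samples per second.
void profile(double budgetSeconds) {
    using clock = std::chrono::steady_clock;
    auto start = clock::now();
    long samples = 0;
    while (std::chrono::duration<double>(clock::now() - start).count() < budgetSeconds) {
        runSample();
        ++samples;
    }
    double elapsed = std::chrono::duration<double>(clock::now() - start).count();
    std::printf("%ld samples in %.1f s (%.2f samples/s)\n",
                samples, elapsed, samples / elapsed);
}
</source>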
 
[[Image:]]
 
Optimized GPU samples after four minutes.
 
[[Image:]]
 
Naive GPU samples after four minutes.
 
Compared with our previous GPU implementation, we completed a total of 188,072 samples versus 88,707, roughly a 112% increase in the number of samples completed in four minutes. Compared with our CPU code, the optimized GPU code is roughly 1,422% faster.
 
==== Test Suite ====
 
During the initial stages of our optimizations, we noticed that incorrect data started showing up after some changes. To ensure the data remained correct after each optimization, we developed a comprehensive test suite. The suite runs multiple tests that compare host values (assumed to be 100% correct) against device values, using each body's final position after a set number of samples. A difference of up to 1.0 is allowed to compensate for accumulated floating-point error.
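The comparison itself could be as simple as the sketch below; the Body layout and the reporting are assumptions, with only the 1.0 tolerance taken from the test suite described above.

<source lang="cpp">
#include <cmath>
#include <cstdio>

struct Body { float x, y, z; };   // only the final positions are compared

// Compare the host (reference) results against the device results, allowing
// a difference of up to `tol` in each coordinate to absorb accumulated
// floating-point error.
bool resultsMatch(const Body* host, const Body* device, int n, float tol = 1.0f) {
    for (int i = 0; i < n; ++i) {
        if (std::fabs(host[i].x - device[i].x) > tol ||
            std::fabs(host[i].y - device[i].y) > tol ||
            std::fabs(host[i].z - device[i].z) > tol) {
            std::printf("Mismatch at body %d\n", i);
            return false;
        }
    }
    return true;
}
</source>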
 
==== Conclusions ====
 
Through the use of CUDA, we achieved a total speedup of 4,229.33% from the serial CPU version to the final optimized GPU version. Using a number of basic techniques, we gained a further 35.4% speedup from the pre-optimized to the post-optimized code. There were several parallelization techniques that we could not get working with our program but that could have sped it up even further; one of them was shared memory.
 
Our kernels repeatedly access the same bodies during the force calculations, so we tried to place them in shared memory, where threads in the same block can access them faster. This only worked while all of the bodies fit in shared memory: each body takes 28 bytes, and graphics cards with a compute capability of 2.x provide at most 48 KB (49,152 bytes) of shared memory per block, which limits the approach to roughly 1,755 bodies (1,755 × 28 = 49,140 bytes). There was a roundabout way of feeding the kernel chunks of bodies at a time, but it only worked on some occasions, so we ended up scrapping it.
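For reference, the chunked approach described above is usually written as the standard tiling pattern sketched below, where each block stages a fixed-size tile of bodies in shared memory at a time. This is a generic sketch of that pattern, not the scrapped project code; the tile size, Body layout, and the requirement that the block size equal the tile size are assumptions.

<source lang="cpp">
#define TILE 256   // bodies staged in shared memory at a time; launch with
                   // blockDim.x == TILE

struct Body { float x, y, z, vx, vy, vz, mass; };   // as in the earlier sketch

__global__ void computeForcesTiled(Body* bodies, int n, float G, float dt) {
    __shared__ Body tile[TILE];   // TILE * 28 bytes, well under the 48 KB limit
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float xi = (i < n) ? bodies[i].x : 0.0f;
    float yi = (i < n) ? bodies[i].y : 0.0f;
    float zi = (i < n) ? bodies[i].z : 0.0f;
    float ax = 0.0f, ay = 0.0f, az = 0.0f;

    for (int base = 0; base < n; base += TILE) {
        int j = base + threadIdx.x;
        if (j < n) tile[threadIdx.x] = bodies[j];   // each thread loads one body
        __syncthreads();
        int count = min(TILE, n - base);
        for (int k = 0; k < count; ++k) {
            float dx = tile[k].x - xi;
            float dy = tile[k].y - yi;
            float dz = tile[k].z - zi;
            float distSqr = dx * dx + dy * dy + dz * dz + 1e-9f;
            float invDist = rsqrtf(distSqr);
            float f = G * tile[k].mass * invDist * invDist * invDist;
            ax += dx * f;  ay += dy * f;  az += dz * f;
        }
        __syncthreads();   // keep the tile in place until every thread is done
    }
    if (i < n) {
        bodies[i].vx += ax * dt;
        bodies[i].vy += ay * dt;
        bodies[i].vz += az * dt;
    }
}
</source>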
 
We initially intended to use the fast-math library provided by CUDA, and at first our results were marginally faster than our previous code. After further optimization, however, we discovered that our code actually performed better without fast-math: with fast-math it took 0.451889 seconds to process 1,000 bodies for 512 samples, whereas without it we got 0.409581 seconds, roughly a 10% improvement.
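For context, fast-math in CUDA is enabled with the nvcc --use_fast_math flag, which trades precision for speed in operations such as single-precision division and sqrtf. The two device helpers below are only an illustration of where that trade-off shows up in an inner loop like ours; they are not part of the project code.

<source lang="cpp">
// Full-precision reciprocal distance: an IEEE divide and square root. Under
// nvcc --use_fast_math both are replaced by faster approximations.
__device__ float invDistPrecise(float distSqr) {
    return 1.0f / sqrtf(distSqr);
}

// Explicitly approximate alternative: rsqrtf computes the reciprocal square
// root directly and is cheaper, at the cost of a small accuracy loss.
__device__ float invDistFast(float distSqr) {
    return rsqrtf(distSqr);
}
</source>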