==== Christopher Ginac Image Processing Library ====
It seems most of our time in this part of the code is spent assigning our enlarged image to the new one, and also creating our image object in the first place. I think if we were to use a GPU for this process, we would see a decrease in run-time for this part of the library. There also seems to be room for improvement in the 'Image::enlargeImage' function itself. By offloading that functionality onto the GPU, we could reduce its 0.76s to something even lower; a sketch of what that could look like follows.
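As a rough illustration only, here is a minimal CUDA sketch of a nearest-neighbour enlarge kernel. The kernel name, the flattened int pixel buffer, and the integer scale factor are assumptions made for the example, not the library's actual API.
<source lang="cpp">
// Hypothetical CUDA sketch (not part of the Ginac library): nearest-neighbour enlarge.
// Each thread writes one pixel of the enlarged image by reading the matching
// source pixel, so the per-pixel work runs in parallel instead of in a CPU loop.
__global__ void enlargeKernel(const int* src, int* dst,
                              int srcRows, int srcCols, int scale) {
    int dstCols = srcCols * scale;
    int dstRows = srcRows * scale;
    int x = blockIdx.x * blockDim.x + threadIdx.x;  // column in enlarged image
    int y = blockIdx.y * blockDim.y + threadIdx.y;  // row in enlarged image
    if (x < dstCols && y < dstRows) {
        dst[y * dstCols + x] = src[(y / scale) * srcCols + (x / scale)];
    }
}

// Launch sketch: one thread per output pixel, e.g. 16x16 thread blocks.
// dim3 block(16, 16);
// dim3 grid((dstCols + 15) / 16, (dstRows + 15) / 16);
// enlargeKernel<<<grid, block>>>(d_src, d_dst, rows, cols, scale);
</source>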
 
Using the same image as above (16MB file), I went ahead and profiled the Negate option as well. This, as the name implies, turns the image into its negative form.
<pre>
real 0m5.707s
user 0m0.000s
sys 0m0.000s
</pre>
 
As you can see, this takes about half the time of the Enlarge option, which is expected considering it is not doing as much work.
 
<pre>
Flat profile:
 
Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
 23.53      0.16     0.16        2    80.00    80.00  Image::Image(Image const&)
 16.18      0.27     0.11        2    55.00    55.00  Image::Image(int, int, int)
 14.71      0.37     0.10                             _fu62___ZSt4cout
 13.24      0.46     0.09 17117346     0.00     0.00  Image::getPixelVal(int, int)
 13.24      0.55     0.09        1    90.00    90.00  Image::operator=(Image const&)
  7.35      0.60     0.05        1    50.00   140.00  writeImage(char*, Image&)
  7.35      0.65     0.05        1    50.00   195.00  Image::negateImage(Image&)
  4.41      0.68     0.03 17117346     0.00     0.00  Image::setPixelVal(int, int, int)
  0.00      0.68     0.00        4     0.00     0.00  Image::~Image()
  0.00      0.68     0.00        3     0.00     0.00  std::operator|(std::_Ios_Openmode, std::_Ios_Openmode)
  0.00      0.68     0.00        1     0.00     0.00  readImageHeader(char*, int&, int&, int&, bool&)
  0.00      0.68     0.00        1     0.00     0.00  readImage(char*, Image&)
  0.00      0.68     0.00        1     0.00     0.00  Image::getImageInfo(int&, int&, int&)
</pre>
 
Notice that in both the Enlarge and Negate cases the function "Image::Image(int, int, int)" is always among the top three functions by time spent. Also, the functions "Image::setPixelVal(int, int, int)" and "Image::getPixelVal(int, int)" are called very often (over 17 million times each in this run). If we focus our efforts on offloading the "Image::getPixelVal(int, int)" and "Image::setPixelVal(int, int, int)" work onto the GPU, since they are very repetitive per-pixel tasks, and also try to optimize the "Image::Image(int, int, int)" constructor, we are sure to see an increase in performance for this program. A sketch of how the negate operation could be offloaded follows.
==== Monte Carlo PI Approximation ====
The program I decided to assess and profile calculates the value of PI using the approximation method called Monte Carlo. This works by taking a circle of area πr<sup>2</sup> inscribed in a square of area 4r<sup>2</sup>, with r being 1.0, and generating randomized points inside the square, with both x and y between -1 and 1, while keeping track of how many points land inside the circle. Since the ratio of points inside the circle to the total number of points approaches π/4, multiplying that ratio by 4 gives the estimate of PI. The more points generated, the more accurate the final calculation of PI will be. The number of points needed for, say, billionth precision can easily reach into the hundreds of billions, each requiring the same mathematical computation, which makes it a fantastic candidate to parallelize.
====== Figure 1 ======
[[File:Pi_calc.png]]
<br/>
Figure 1: Graphical representation of the Monte Carlo method of approximating PI
----
====== Figure 2 ======
{| class="wikitable mw-collapsible mw-collapsed"
! pi.cpp
|}
Figure 2: Serial C++ program used for profiling of the Monte Carlo method of approximating PI
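Since the collapsed listing above is not reproduced here, the following is a minimal sketch of a serial Monte Carlo estimator along the lines described above. It is not the original pi.cpp; the variable names and the 2-billion-point count are illustrative.
<source lang="cpp">
#include <cstdlib>
#include <iostream>

// Sketch of the serial Monte Carlo approach: generate random points in the
// 2x2 square, count how many fall inside the unit circle, and use
// pi ~= 4 * inside / total.
int main() {
    const long long total = 2000000000LL;   // 2 billion points, as in the profiled run
    long long inside = 0;
    for (long long i = 0; i < total; ++i) {
        double x = 2.0 * rand() / RAND_MAX - 1.0;   // x in [-1, 1]
        double y = 2.0 * rand() / RAND_MAX - 1.0;   // y in [-1, 1]
        if (x * x + y * y <= 1.0)
            ++inside;
    }
    std::cout << "pi ~= " << 4.0 * inside / total << std::endl;
    return 0;
}
</source>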
===== Results =====
You need many billions of points, possibly even trillions, to reach a high precision in the final result, and using just 2 billion points already causes the program to take over 30 seconds to run. The most intensive part of the program is the loop, which iterates 2 billion times in my profiled run, and all of that work can be parallelized. The profiling shows that essentially 100% of the execution time is spent in that loop; since a perfectly parallel program is not possible, we will go with 99.9%. Using a GTX 1080 as an example GPU, which has 20 SMX processors with 2048 threads each, Amdahl's Law gives an expected speedup of about 976.191 times.
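As a rough check of that figure, Amdahl's Law with a parallel fraction of P = 0.999 and n = 20 × 2048 = 40960 threads gives:

<math>S = \frac{1}{(1 - P) + \frac{P}{n}} = \frac{1}{0.001 + \frac{0.999}{40960}} \approx 976.2</math>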
=== Assignment 2 ===
=== Assignment 3 ===