
CDOT Wiki β


BETTERRED

31,138 bytes added, 20:30, 12 April 2017
Code
The program can then be executed by running the compiled binary, and it will display the time it took to generate the Mandelbrot set and save the pictures.
{| class="wikitable mw-collapsible mw-collapsed"
! Mandelbrot CPU( ... )
|-
|
<syntaxhighlight lang="cpp">
#include <iostream>
#include <complex>
#include <vector>
#include <chrono>
#include <functional>

#include "window.h"
#include "save_image.h"
#include "utils.h"

// clang++ -std=c++11 -stdlib=libc++ -O3 save_image.cpp utils.cpp mandel.cpp -lfreeimage

// Use an alias to simplify the use of complex type
using Complex = std::complex<float>;

// Convert a pixel coordinate to the complex domain
Complex scale(window<int> &scr, window<float> &fr, Complex c) {
    Complex aux(c.real() / (float)scr.width() * fr.width() + fr.x_min(),
        c.imag() / (float)scr.height() * fr.height() + fr.y_min());
    return aux;
}

// Check if a point is in the set or escapes to infinity, return the number of iterations
int escape(Complex c, int iter_max, const std::function<Complex(Complex, Complex)> &func) {
    Complex z(0);
    int iter = 0;

    while (abs(z) < 2.0 && iter < iter_max) {
        z = func(z, c);
        iter++;
    }

    return iter;
}

// Loop over each pixel from our image and check if the points associated with this pixel escape to infinity
void get_number_iterations(window<int> &scr, window<float> &fract, int iter_max, std::vector<int> &colors,
    const std::function<Complex(Complex, Complex)> &func) {
    int k = 0, progress = -1;
    for (int i = scr.y_min(); i < scr.y_max(); ++i) {
        for (int j = scr.x_min(); j < scr.x_max(); ++j) {
            Complex c((float)j, (float)i);
            c = scale(scr, fract, c);
            colors[k] = escape(c, iter_max, func);
            k++;
        }
        if (progress < (int)(i * 100.0 / scr.y_max())) {
            progress = (int)(i * 100.0 / scr.y_max());
            std::cout << progress << "%\n";
        }
    }
}

void fractal(window<int> &scr, window<float> &fract, int iter_max, std::vector<int> &colors,
    const std::function<Complex(Complex, Complex)> &func, const char *fname, bool smooth_color) {
    auto start = std::chrono::steady_clock::now();
    get_number_iterations(scr, fract, iter_max, colors, func);
    auto end = std::chrono::steady_clock::now();
    std::cout << "Time to generate " << fname << " = "
        << std::chrono::duration<float, std::milli>(end - start).count() << " [ms]" << std::endl;

    // Save (show) the result as an image
    plot(scr, colors, iter_max, fname, smooth_color);
}

void mandelbrot() {
    // Define the size of the image
    window<int> scr(0, 1000, 0, 1000);
    // The domain in which we test for points
    window<float> fract(-2.2, 1.2, -1.7, 1.7);

    // The function used to calculate the fractal
    auto func = [] (Complex z, Complex c) -> Complex { return z * z + c; };

    int iter_max = 500;
    const char *fname = "mandelbrot.png";
    bool smooth_color = true;
    std::vector<int> colors(scr.size());

    // Experimental zoom (bugs ?). This will modify the fract window (the domain in which we calculate the fractal function)
    //zoom(1.0, -1.225, -1.22, 0.15, 0.16, fract); //Z2

    fractal(scr, fract, iter_max, colors, func, fname, smooth_color);
}

void triple_mandelbrot() {
    // Define the size of the image
    window<int> scr(0, 2000, 0, 2000);
    // The domain in which we test for points
    window<float> fract(-1.5, 1.5, -1.5, 1.5);

    // The function used to calculate the fractal
    auto func = [] (Complex z, Complex c) -> Complex { return z * z * z + c; };

    int iter_max = 500;
    const char *fname = "triple_mandelbrot.png";
    bool smooth_color = true;
    std::vector<int> colors(scr.size());

    fractal(scr, fract, iter_max, colors, func, fname, smooth_color);
}

int main() {
    mandelbrot();
    // triple_mandelbrot();

    return 0;
}
</syntaxhighlight>
|}
=== Observations ===
The program takes a significant amount of time to run as the calculations are being done on the CPU. There are nested loops present within the program that can be parallelized to make the program faster. The code also has the size of the image and the iterations hard-coded; these can be modified to make the program take significantly longer to process, which makes it tough on GPUs for benchmarking and stability testing by running the process in a loop. The code is relatively straightforward, so the parallelization should also be easy to implement and test.
=== Hotspot ===
The hotspot for the program was found in the fractal() function, which calls the get_number_iterations() function that contains 2-nested for loops and a call to escape(), which contains a while loop. Profiling the runtime with Instruments on OS X showed that the fractal() function took up the most runtime, so this is the function that will be parallelized using CUDA. Once the function is parallelized, the iterations and size of the image can be increased in order to make the computation stressful enough on the GPU to get a benchmark, or looped in order to do stress testing for GPUs.

=== Profiling Data Screenshots ===
Profile - [https://drive.google.com/open?id=0B2Y_atB3DptbUG5oRWMyUGNQdlU Profile]
Hotspot Code - [https://drive.google.com/open?id=0B2Y_atB3DptbRlhCUTNyeEFDbEk Hotspot Code]
----

== Introduction : GPU Benchmarking/Testing for NBody : Joshua Kraitberg ==
This program uses Newtonian mechanics and a fourth-order symplectic Candy-Rozmus integration (a symplectic algorithm guarantees exact conservation of energy and angular momentum). The initial conditions are obtained from JPL Horizons, and constants (like masses and the gravitational constant) are those recommended by the International Astronomical Union. The program currently does not take into account effects like general relativity, the non-spherical shapes of celestial objects, tidal effects on Earth, etc. It also does not take the 500 asteroids used by JPL Horizons into account in its model of the Solar System.

[https://github.com/fding/nbody Source]

=== Compilation Instructions: ===
For Unix/Linux based systems:
 g++ -std=c++11 c++/nbody.cpp

=== Observations ===
The program is quite fast for being a single-threaded CPU application. Almost all the CPU time is spent manipulating data and iterating in vectors.
=== Hotspot ===
Essentially all of the running time is spent doing calculations on vectors. The dowork function iteratively calls the CRO_step function found in the integrators.h file. The CRO_step function is where most of the vector calculations take place. A large amount of work is also done in the calculate_a function, which is used to calculate the acceleration on all the planets.
=== Profiling Data and Screenshots ===
{| class="wikitable mw-collapsible mw-collapsed"
! NBody Hot Functions
|-
|
<syntaxhighlight lang="cpp">
void dowork(double t){
    int numtimes=int(abs(t/dt));
    dt=t/double(numtimes+1);
    numtimes=numtimes+1;
    for (int i=0;i<numtimes;i++){
        CRO_step(dt,a);
    }
}

void CRO_step(register double mydt,void (*a)()){
    long double macr_a[4] = {0.5153528374311229364, -0.085782019412973646, 0.4415830236164665242, 0.1288461583653841854};
    long double macr_b[4] = {0.1344961992774310892, -0.2248198030794208058, 0.7563200005156682911, 0.3340036032863214255};
    for (int i=0;i<4;i++){
        a();
        for (int j=0;j<ncobjects;j++){
            cobjects[j]->v += cobjects[j]->a * mydt*macr_b[i];
            cobjects[j]->pos += cobjects[j]->v * mydt*macr_a[i];
        }
    }
    //We should really expand the loop for efficiency
}

void calculate_a(){
    for (int j1=0;j1<ncobjects;j1++){
        cobjects[j1]->a=vect(0,0,0);
    }
    for (int j1=0; j1<ncobjects;j1++){
        for (int j2=j1+1;j2<ncobjects;j2++){
            double m1=cobjects[j1]->m;
            double m2=cobjects[j2]->m;
            vect dist=cobjects[j1]->pos-cobjects[j2]->pos;
            double magd=dist.mag();
            vect base=dist*(1.0/(magd*magd*magd));
            cobjects[j1]->a+=base*(-m2);
            cobjects[j2]->a+=base*m1;
        }
    }
}
</syntaxhighlight>
|}

{| class="wikitable mw-collapsible mw-collapsed"
! NBody Hot Spot Data
|-
|
<pre>
Call graph (explanation follows)

granularity: each sample hit covers 4 byte(s) for 0.16% of 6.18 seconds

index % time    self  children    called     name
                                                 <spontaneous>
[1]     99.7    0.00    6.16                 main [1]
                0.00    6.15       1/1           dowork(double) [3]
                0.00    0.01       1/1           totalL() [14]
                0.00    0.00       1/1           totalE() [16]
                0.00    0.00       1/1           initialize() [17]
                0.00    0.00      28/32712799    vect::operator-(vect const&) [8]
                0.00    0.00      14/118268959   vect::operator*(double const&) [5]
                0.00    0.00      14/5032775     vect::operator=(vect const&) [11]
                0.00    0.00      42/42          std::vector<int, std::allocator<int> >::operator[](unsigned int) [22]
                0.00    0.00      16/16          bool std::operator==<char, std::char_traits<char>, std::allocator<char> >(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char const*) [33]
                0.00    0.00      15/35          std::vector<int, std::allocator<int> >::size() const [23]
                0.00    0.00      14/14          std::vector<int, std::allocator<int> >::push_back(int const&) [39]
                0.00    0.00      14/14          getobj(int) [36]
                0.00    0.00       3/3           std::vector<double, std::allocator<double> >::operator[](unsigned int) [90]
                0.00    0.00       2/2           print_hline() [94]
                0.00    0.00       2/10          std::vector<double, std::allocator<double> >::size() const [45]
                0.00    0.00       1/1           std::ios_base::precision(int) [146]
                0.00    0.00       1/1           std::vector<double, std::allocator<double> >::vector() [142]
                0.00    0.00       1/1           std::vector<int, std::allocator<int> >::vector() [144]
                0.00    0.00       1/1           std::vector<double, std::allocator<double> >::push_back(double const&) [141]
                0.00    0.00       1/1           std::vector<std::string, std::allocator<std::string> >::vector() [135]
                0.00    0.00       1/1           std::vector<std::string, std::allocator<std::string> >::~vector() [136]
                0.00    0.00       1/1           JD(tm*) [103]
                0.00    0.00       1/1           std::vector<double, std::allocator<double> >::push_back(double&&) [140]
                0.00    0.00       1/1           std::vector<int, std::allocator<int> >::~vector() [145]
                0.00    0.00       1/1           std::vector<double, std::allocator<double> >::~vector() [143]
-----------------------------------------------
                0.14    6.01   89870/89870       dowork(double) [3]
[2]     99.6    0.14    6.01   89870         CRO_step(double, void (*)()) [2]
                1.18    4.22  359480/359480      calculate_a() [4]
                0.20    0.29 20130880/118268959  vect::operator*(double const&) [5]
                0.12    0.00 10065440/75490814   vect::operator+=(vect const&) [7]
-----------------------------------------------
                0.00    6.15       1/1           main [1]
[3]     99.6    0.00    6.15       1         dowork(double) [3]
                0.14    6.01   89870/89870      CRO_step(double, void (*)()) [2]
                0.00    0.00       1/1          std::abs(double) [147]
-----------------------------------------------
                1.18    4.22  359480/359480     CRO_step(double, void (*)()) [2]
[4]     87.5    1.18    4.22  359480         calculate_a() [4]
                1.00    1.39 98138040/118268959  vect::operator*(double const&) [5]
                0.78    0.00 65425360/75490814   vect::operator+=(vect const&) [7]
                0.26    0.37 32712680/32712799   vect::operator-(vect const&) [8]
                0.32    0.00 32712680/32712785   vect::mag() [10]
                0.08    0.00 5032720/5032775     vect::operator=(vect const&) [11]
                0.01    0.00 5032720/5032775     vect::vect(double, double, double) [13]
-----------------------------------------------
                0.00    0.00      11/118268959   initialize() [17]
                0.00    0.00      14/118268959   main [1]
                0.00    0.00      14/118268959   totalL() [14]
                0.20    0.29 20130880/118268959  CRO_step(double, void (*)()) [2]
                1.00    1.39 98138040/118268959  calculate_a() [4]
[5]     46.5    1.20    1.67 118268959       vect::operator*(double const&) [5]
                1.67    0.00 118268959/118268959 vect::operator*=(double const&) [6]
-----------------------------------------------
                1.67    0.00 118268959/118268959 vect::operator*(double const&) [5]
[6]     27.1    1.67    0.00 118268959       vect::operator*=(double const&) [6]
-----------------------------------------------
                0.00    0.00      14/75490814    totalL() [14]
                0.12    0.00 10065440/75490814   CRO_step(double, void (*)()) [2]
                0.78    0.00 65425360/75490814   calculate_a() [4]
[7]     14.6    0.91    0.00 75490814        vect::operator+=(vect const&) [7]
-----------------------------------------------
                0.00    0.00      28/32712799    main [1]
                0.00    0.00      91/32712799    totalE() [16]
                0.26    0.37 32712680/32712799   calculate_a() [4]
[8]     10.4    0.27    0.38 32712799        vect::operator-(vect const&) [8]
                0.38    0.00 32712799/32712799   vect::operator-=(vect const&) [9]
-----------------------------------------------
                0.38    0.00 32712799/32712799   vect::operator-(vect const&) [8]
[9]      6.1    0.38    0.00 32712799        vect::operator-=(vect const&) [9]
-----------------------------------------------
                0.00    0.00     105/32712785    totalE() [16]
                0.32    0.00 32712680/32712785   calculate_a() [4]
[10]     5.2    0.32    0.00 32712785        vect::mag() [10]
-----------------------------------------------
                0.00    0.00      14/5032775     main [1]
                0.00    0.00      41/5032775     initialize() [17]
                0.08    0.00 5032720/5032775     calculate_a() [4]
[11]     1.4    0.08    0.00 5032775         vect::operator=(vect const&) [11]
-----------------------------------------------
</pre>
|}
[[Image:F2RiP.gif|500px|thumb|alt=convolution pattern]]
[[Image:Img16.png|500px|thumb|alt=Plot of frequency response of the 2D Gaussian]]
===What is Gaussian blurring?===
At a high level, Gaussian blurring works just like [https://en.wikipedia.org/wiki/Box_blur box blurring] in that there is a weight per pixel and that for each pixel, you apply the weights to that pixel and its neighbors to come up<br/>
with the final value for the blurred pixel. It uses a convolution pattern which is a linear stencil that applies fixed weights to the elements of a neighborhood in the combination operation.
With true Gaussian blurring however, the function that defines the weights for each pixel technically never reaches zero, but gets smaller and smaller over distance. In theory, this makes a<br/>
Gaussian kernel infinitely large. In practice though, you can choose a cut-off point and call it good enough.
====The parameters to a Gaussian blur are:====
*Sigma – The blur amount (the standard deviation of the Gaussian).
*Radius – The size of the kernel in pixels. The appropriate pixel size can be calculated for a specific sigma; more on that below.
Just like a box blur, a Gaussian blur is separable which means that you can either apply a 2D convolution kernel, or you can apply a 1D convolution kernel on each axis. Doing a single 2D convolution<br/>
means more calculations, but you only need one buffer to put the results into. Doing two 1D convolutions (one on each axis), ends up being fewer calculations, but requires two buffers to put the results<br/>
into (one intermediate buffer to hold the first axis results).
<br/>This kernel is useful for a two pass algorithm: First, perform a horizontal blur with the weights below and then perform a vertical blur on the resulting image (or vice versa).<br/>
 
Below is a 3×3 pixel 2D Gaussian Kernel also with a sigma of 1.0. Note that this can be calculated as an outer product (tensor product) of 1D kernels:
Doing multiple smaller blurs to approximate one larger blur takes more calculations overall, so it is not usually worthwhile.
If you apply multiple blurs, the equivalent blur is the square root of the sum of the squares of the blurs. Taking Wikipedia's [https://en.wikipedia.org/wiki/Gaussian_blur example], if you applied a blur with radius 6 and a blur<br/>with a radius of 8, you'd end up with the equivalent of a radius 10 blur. This is because &radic; ( 6<sup>2</sup> + 8<sup>2</sup> ) = 10
[[Image:Kernalweightperpixel.PNG|500px|thumb|alt=2D Gaussian]]
====Calculating The Kernel====
There are a couple of ways to calculate a Gaussian kernel. One way is to sample the Gaussian function, e<sup>-x&sup2;/(2&sigma;&sup2;)</sup>, directly at each tap, where the sigma is your blur amount and x ranges across your values from the negative to the positive. For instance, if your kernel was 5 values, it would range from -2 to +2.
An even better way is to integrate the Gaussian function instead of just taking point samples. Refer to the two graphs on the right.<br/>The graphs plot the continuous distribution function and the discrete kernel approximation. One thing to look out for are the tails of the distribution vs. the kernel support:<br/>
For the current configuration, we have 13.36% of the curve’s area outside the discrete kernel. Note that the weights are renormalized such that the sum of all weights is one. Or in other words:<br/>
the probability mass outside the discrete kernel is redistributed evenly to all pixels within the kernel. The weights are calculated by numerical integration of the continuous gaussian distribution<br/>
over each discrete kernel tap.
Whatever way you do it, make sure to normalize the result so that the weights add up to 1. This makes sure that your blurring doesn't make the image get brighter (greater than 1) or dimmer (less than 1).
====Calculating The Kernel Size====
Given a sigma value, you can calculate the size of the kernel you need by using this formula: 1 + 2 &radic; ( -2&sigma;<sup>2</sup> ln 0.005 )
That formula makes a kernel large enough that it cuts off when the value in the kernel is less than 0.5%. You can adjust the number in there higher or lower depending on your desired<br/>trade-off between accuracy and kernel size.
===Running the program===
 
====Code====
{| class="wikitable mw-collapsible mw-collapsed"
! Windows [https://goo.gl/aAUr6m source] - Gaussian Blur Filter Main (Visual Studio)
|-
|
<syntaxhighlight lang="cpp">
// Original example from https://goo.gl/aAUr6m
 
#include <iostream>
#include <stdio.h>
// ...
</syntaxhighlight>
|}
{| class="wikitable mw-collapsible mw-collapsed"
! Linux source - Gaussian Blur Filter Main (Command Line)
|-
|
<syntaxhighlight lang="cpp">
// Original example from https://goo.gl/aAUr6m
 
#include <iostream>
#include <stdio.h>
// ...
char *destFileName = argv[2];
// ...
#endif /* RUN_GPROF */
if (showUsage)
// ...
</syntaxhighlight>
|}
{| class="wikitable mw-collapsible mw-collapsed"
! Linux source - Gaussian Blur Filter Header (Linux cannot use the Windows API, so the required structs are replicated. Ref: MSDN[https://msdn.microsoft.com/en-us/library/windows/desktop/dd183374(v=vs.85).aspx 1][https://msdn.microsoft.com/en-us/library/windows/desktop/dd183376(v=vs.85).aspx 2])
|-
|
|}

====Windows====
To compile and run the program:
# Copy the Windows version of the main source code above and paste it into a [your chosen file name].cpp file.
# Go into the Debug properties of your project.
# Add four (4) values into the Debugging -> Command Arguments (outlined below)
# Run in Release x64
The command line arguments are structured as follows:
[input image filename].bmp [output image filename].bmp [x - sigma value] [y - sigma value] => cinque_terre.bmp cinque_terre_BLURRED.bmp 3.0 3.0
 
====Linux====
To compile and run the program:
# Navigate to the directory you want to run the program in.
# Save [http://matrix.senecac.on.ca/~cpaul12/cinque_terre.bmp this] image and place it into the directory you will be running the program from.
# Copy the Linux version of the main source code above and paste it into a [your chosen file name].cpp file.
# Copy the Linux version of the header source code above and paste it into a file named windows.h.
Compile the binaries using the following command:
g++ -O2 -std=c++0x -Wall -pedantic [your chosen file name].cpp -o gblur
The command line arguments are structured as follows: [input image filename].bmp [output image filename].bmp [x - sigma value] [y - sigma value]. Run the compiled program with the required arguments:
./gblur cinque_terre.bmp cinque_terre_BLURRED.bmp 3.0 3.0
 
====Mac OS X====
To compile and run the program:
# Navigate to the directory you want to run the program in.
# Save [http://matrix.senecac.on.ca/~cpaul12/cinque_terre.bmp this] image and place it into the directory you will be running the program from.
# Copy the Linux version of the main source code above and paste it into a [your chosen file name].cpp file.
# Copy the Linux version of the header source code above and paste it into a file named windows.h.
Compile the binaries using the following command:
clang++ -O2 -std=c++0x -Wall -pedantic [your chosen file name].cpp -o gblur
The command line arguments are structured as follows:
[input image filename].bmp [output image filename].bmp [x - sigma value] [y - sigma value]
Run the compiled program with the required arguments
./gblur cinque_terre.bmp cinque_terre_BLURRED.bmp 3.0 3.0
===Analysis===
{| class="wikitable mw-collapsible mw-collapsed"
! BlurImage( ... )
|-
|
<syntaxhighlight lang="cpp">
// ...
        }
    }
}

{
    auto row = GaussianKernelIntegrals(yblursigma, yblursize);
    // ...
    for (int y = 0; y < destImage.m_height; ++y) {
        for (int x = 0; x < destImage.m_width; ++x) {
            // ...
            for (unsigned int i = 0; i < row.size(); ++i) {
                const uint8_t *pixel = GetPixelOrBlack(tmpImage, x, y + startOffset + i);
                // ...
            }
            // ...
        }
    }
}
</syntaxhighlight>
|}
According to the Flat profile, 61.38% of the time is spent in the BlurImage function. This function contains a set of triply-nested for-loops, which equates to a run-time of T(n) = O(n<sup>3</sup>).<br/>Referring to the Call graph we can see more supporting evidence that this application spends nearly all of its execution time in the BlurImage function. Therefore this function is the prime candidate<br/>for parallelization using CUDA. The sigma (&sigma;) and the kernel size can be increased in order to make the computation stressful on the GPU to get a significant benchmark.

= Assignment 2/3 - Parallelize & Optimize =
&#42; For Gaussian blur we say it's unoptimized because we feel that there is more that can be done to reduce the execution times.<br/>&nbsp;&nbsp;The code displayed in the code snippets does use CUDA parallel constructs and fine-tuning techniques such as streaming - async.

== Gaussian Blur ==

{| class="wikitable mw-collapsible mw-collapsed"
! Unoptimized* - BlurImage( ... )
|-
|
<syntaxhighlight lang="cpp">
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <array>
#include <vector>
#include <functional>
#include <windows.h> // for bitmap headers.
#include <algorithm>
#include <chrono>

#include <cuda_runtime.h>
// to remove intellisense highlighting
#include <device_launch_parameters.h>
#include <device_functions.h>

//#ifdef __CUDACC__
//#if __CUDACC_VER_MAJOR__ == 1
//const int ntpb = 512;
//#else
//const int ntpb = 1024;
//#endif
//#endif
const int ntpb = 1024;
const int STREAMS = 32;

void check(cudaError_t error) {
    if (error != cudaSuccess) {
        throw std::exception(cudaGetErrorString(error));
    }
}

struct SImageData
{
    SImageData()
        : m_width(0)
        , m_height(0)
    { }

    long m_width;
    long m_height;
    long m_pitch;
    std::vector<uint8_t> m_pixels;
};

void WaitForEnter()
{
    char c;
    std::cout << "Press Enter key to exit ... ";
    std::cin.get(c);
}

bool LoadImage(const char *fileName, SImageData& imageData)
{
    // open the file if we can
    FILE *file;
    file = fopen(fileName, "rb");
    if (!file)
        return false;

    // read the headers if we can
    BITMAPFILEHEADER header;
    BITMAPINFOHEADER infoHeader;
    if (fread(&header, sizeof(header), 1, file) != 1 ||
        fread(&infoHeader, sizeof(infoHeader), 1, file) != 1 ||
        header.bfType != 0x4D42 || infoHeader.biBitCount != 24)
    {
        fclose(file);
        return false;
    }

    // read in our pixel data if we can. Note that it's in BGR order, and width is padded to the next power of 4
    imageData.m_pixels.resize(infoHeader.biSizeImage);
    fseek(file, header.bfOffBits, SEEK_SET);
    if (fread(&imageData.m_pixels[0], imageData.m_pixels.size(), 1, file) != 1)
    {
        fclose(file);
        return false;
    }

    imageData.m_width = infoHeader.biWidth;
    imageData.m_height = infoHeader.biHeight;

    imageData.m_pitch = imageData.m_width * 3;
    if (imageData.m_pitch & 3)
    {
        imageData.m_pitch &= ~3;
        imageData.m_pitch += 4;
    }

    fclose(file);
    return true;
}

bool SaveImage(const char *fileName, const SImageData &image)
{
    // open the file if we can
    FILE *file;
    file = fopen(fileName, "wb");
    if (!file)
        return false;

    // make the header info
    BITMAPFILEHEADER header;
    BITMAPINFOHEADER infoHeader;

    header.bfType = 0x4D42;
    header.bfReserved1 = 0;
    header.bfReserved2 = 0;
    header.bfOffBits = 54;

    infoHeader.biSize = 40;
    infoHeader.biWidth = image.m_width;
    infoHeader.biHeight = image.m_height;
    infoHeader.biPlanes = 1;
    infoHeader.biBitCount = 24;
    infoHeader.biCompression = 0;
    infoHeader.biSizeImage = image.m_pixels.size();
    infoHeader.biXPelsPerMeter = 0;
    infoHeader.biYPelsPerMeter = 0;
    infoHeader.biClrUsed = 0;
    infoHeader.biClrImportant = 0;

    header.bfSize = infoHeader.biSizeImage + header.bfOffBits;

    // write the data and close the file
    fwrite(&header, sizeof(header), 1, file);
    fwrite(&infoHeader, sizeof(infoHeader), 1, file);
    fwrite(&image.m_pixels[0], infoHeader.biSizeImage, 1, file);
    fclose(file);
    return true;
}

int PixelsNeededForSigma(float sigma)
{
    // returns the number of pixels needed to represent a gaussian kernal that has values
    // down to the threshold amount. A gaussian function technically has values everywhere
    // on the image, but the threshold lets us cut it off where the pixels contribute to
    // only small amounts that aren't as noticeable.
    const float c_threshold = 0.005f; // 0.5%
    return int(floor(1.0f + 2.0f * sqrtf(-2.0f * sigma * sigma * log(c_threshold)))) + 1;
}

float Gaussian(float sigma, float x)
{
    return expf(-(x*x) / (2.0f * sigma*sigma));
}

float GaussianSimpsonIntegration(float sigma, float a, float b)
{
    return ((b - a) / 6.0f) *
        (Gaussian(sigma, a) + 4.0f * Gaussian(sigma, (a + b) / 2.0f) + Gaussian(sigma, b));
}

std::vector<float> GaussianKernelIntegrals(float sigma, int taps)
{
    std::vector<float> ret;
    float total = 0.0f;
    for (int i = 0; i < taps; ++i)
    {
        float x = float(i) - float(taps / 2);
        float value = GaussianSimpsonIntegration(sigma, x - 0.5f, x + 0.5f);
        ret.push_back(value);
        total += value;
    }
    // normalize it
    for (unsigned int i = 0; i < ret.size(); ++i)
    {
        ret[i] /= total;
    }
    return ret;
}

struct BGRPixel {
    float b;
    float g;
    float r;
};

__global__ void blur_kernel(BGRPixel* imageIn, BGRPixel* imageOut, float* blur, int n_blur, int x, int start, int jump) {
    int idx = blockDim.x*blockIdx.x + threadIdx.x; // Location on the row

    if (idx < x) {
        int id = start + idx;
        int bstart = id - (n_blur / 2)*jump;

        BGRPixel pixel{ 0.0f, 0.0f, 0.0f };

        for (int i = 0; i < n_blur; ++i) {
            int bid = bstart + i*jump;
            float iblur = blur[i];

            pixel.b += imageIn[bid].b * iblur;
            pixel.g += imageIn[bid].g * iblur;
            pixel.r += imageIn[bid].r * iblur;
        }

        imageOut[id].b = pixel.b;
        imageOut[id].g = pixel.g;
        imageOut[id].r = pixel.r;
    }
}

void BlurImage(const SImageData& srcImage, SImageData &destImage, float xblursigma, float yblursigma, unsigned int xblursize, unsigned int yblursize)
{
    int xImage = srcImage.m_width;   // Width of image
    int yImage = srcImage.m_height;  // Height of image
    int imageSize = xImage*yImage;

    int xPadded = xImage + (xblursize - 1);  // Width including padding
    int yPadded = yImage + (yblursize - 1);  // Height including padding
    int paddedSize = xPadded*yPadded;

    int xPad = xblursize / 2;             // Number of padding columns on each side
    int yPad = yblursize / 2;
    int padOffset = xPadded*yPad + xPad;  // Offset to first pixel in padded image

    float* pinnedImage = nullptr;
    BGRPixel* d_padded1 = nullptr;
    BGRPixel* d_padded2 = nullptr;

    float* d_xblur = nullptr;  // XBlur integrals
    int n_xblur;               // N

    float* d_yblur = nullptr;  // YBlur integrals
    int n_yblur;               // N

    // Allocate memory for host and device
    check(cudaHostAlloc((void**)&pinnedImage, 3 * imageSize * sizeof(float), 0));
    check(cudaMalloc((void**)&d_padded1, paddedSize * sizeof(BGRPixel)));
    check(cudaMalloc((void**)&d_padded2, paddedSize * sizeof(BGRPixel)));

    // Copy image to pinned memory
    for (int i = 0; i < 3 * imageSize; ++i) {
        pinnedImage[i] = (float)srcImage.m_pixels[i];
    }

    // Allocate and assign intergrals
    {
        auto row_blur = GaussianKernelIntegrals(xblursigma, xblursize);
        auto col_blur = GaussianKernelIntegrals(yblursigma, yblursize);

        // ROW
        n_xblur = row_blur.size();
        check(cudaMalloc((void**)&d_xblur, n_xblur * sizeof(float)));
        check(cudaMemcpy(d_xblur, row_blur.data(), n_xblur * sizeof(float), cudaMemcpyHostToDevice));

        // COLUMN
        n_yblur = col_blur.size();
        check(cudaMalloc((void**)&d_yblur, n_yblur * sizeof(float)));
        check(cudaMemcpy(d_yblur, col_blur.data(), n_yblur * sizeof(float), cudaMemcpyHostToDevice));
    }

    cudaStream_t stream[STREAMS];

    int nblks = (xImage + (ntpb - 1)) / ntpb;

    for (int i = 0; i < STREAMS; ++i) {
        check(cudaStreamCreate(&stream[i]));
    }

    for (int i = 0; i < yImage;) {
        for (int j = 0; j < STREAMS && i < yImage; ++j, ++i) {
            cudaMemcpyAsync(d_padded1 + padOffset + i*xPadded, pinnedImage + (3 * i*xImage),
                3 * xImage * sizeof(float), cudaMemcpyHostToDevice, stream[j]);
        }
    }

    for (int i = 0; i < yImage;) {
        for (int j = 0; j < STREAMS && i < yImage; ++j, ++i) {
            blur_kernel<<<nblks, ntpb, 0, stream[j]>>>(d_padded1, d_padded2, d_xblur, n_xblur, xImage, padOffset + i*xPadded, 1);
        }
    }

    for (int i = 0; i < yImage;) {
        for (int j = 0; j < STREAMS && i < yImage; ++j, ++i) {
            blur_kernel<<<nblks, ntpb, 0, stream[j]>>>(d_padded2, d_padded1, d_yblur, n_yblur, xImage, padOffset + i*xPadded, xPadded);
        }
}  for (int i = 0; i < yImage;) { for (int j = 0; j < STREAMS && i < yImage; ++j, ++i) { check(cudaMemcpyAsync(pinnedImage + (3 * i*xImage), d_padded1 + padOffset + i*xPadded, xImage * sizeof(BGRPixel), cudaMemcpyDeviceToHost, stream[j])); } }  for (int i = 0; i < STREAMS; ++i) { check(cudaStreamSynchronize(stream[i])); check(cudaStreamDestroy(stream[i])); }  destImage.m_width = srcImage.m_width; destImage.m_height = srcImage.m_height; destImage.m_pitch = srcImage.m_pitch; destImage.m_pixels.resize(srcImage.m_pixels.size());  for (int i = 0; i < 3 * imageSize; i++) { destImage.m_pixels[i] = (uint8_t)pinnedImage[i]; };  check(cudaFree(d_xblur)); check(cudaFree(d_yblur));  check(cudaFreeHost(pinnedImage)); check(cudaFree(d_padded1)); check(cudaFree(d_padded2));  check(cudaDeviceReset());} int main(int argc, char **argv){ float xblursigma, yblursigma;  bool showUsage = argc < 5 || (sscanf(argv[3], "%f", &xblursigma) != 1) || (sscanf(argv[4], "%f", &yblursigma) != 1);  char *srcFileName = argv[1]; char *destFileName = argv[2];  if (showUsage) { printf("Usage: <source> <dest> <xblur> <yblur>\nBlur values are sigma\n\n"); WaitForEnter(); return 1; }  // calculate pixel sizes, and make sure they are odd int xblursize = PixelsNeededForSigma(xblursigma) | 1; int yblursize = PixelsNeededForSigma(yblursigma) | 1;  printf("Attempting to blur a 24 bit image.\n"); printf(" Source=%s\n Dest=%s\n blur=[%0.1f, %0.1f] px=[%d,%d]\n\n", srcFileName, destFileName, xblursigma, yblursigma, xblursize, yblursize);  SImageData srcImage; if (LoadImage(srcFileName, srcImage)) { printf("%s loaded\n", srcFileName); SImageData destImage;  auto t1 = std::chrono::high_resolution_clock::now(); BlurImage(srcImage, destImage, xblursigma, yblursigma, xblursize, yblursize); auto t2 = std::chrono::high_resolution_clock::now();  std::cout << "BlurImage time: " << std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1).count() << "us" << std::endl;   if (SaveImage(destFileName, destImage)) 
            printf("Blurred image saved as %s\n", destFileName);
        else
        {
            printf("Could not save blurred image as %s\n", destFileName);
            WaitForEnter();
            return 1;
        }
    }
    else
    {
        printf("could not read 24 bit bmp file %s\n\n", srcFileName);
        WaitForEnter();
        return 1;
    }
    return 0;
}
</syntaxhighlight>
|}

== Objectives ==
The main objective was to leave the main function unchanged. This objective was met, although code had to be added for profiling.

== Steps ==
=== Host Memory Management ===
In the original program, a bmp is loaded into a vector of uint8_t. This layout is not ideal for CUDA, so an array of pinned memory was allocated instead. The array holds the same number of elements, but stores them as a structure, "BGRPixel", which is three contiguous floats. The vector is then transferred into pinned memory.
{| class="wikitable mw-collapsible mw-collapsed"
! Host Memory Management - Code
|-
|
<syntaxhighlight lang="cpp">
struct SImageData
{
    SImageData()
        : m_width(0)
        , m_height(0)
    { }

    long m_width;
    long m_height;
    long m_pitch;
    std::vector<uint8_t> m_pixels;
};

struct BGRPixel {
    float b;
    float g;
    float r;
};

void BlurImage(const SImageData& srcImage, SImageData &destImage, float xblursigma, float yblursigma, unsigned int xblursize, unsigned int yblursize)
{
    int xImage = srcImage.m_width;          // Width of image
    int yImage = srcImage.m_height;         // Height of image
    int imageSize = xImage*yImage;

    int xPadded = xImage + (xblursize - 1); // Width including padding
    int yPadded = yImage + (yblursize - 1); // Height including padding
    int paddedSize = xPadded*yPadded;

    int xPad = xblursize / 2;               // Number of padding columns on each side
    int yPad = yblursize / 2;
    int padOffset = xPadded*yPad + xPad;    // Offset to first pixel in padded image

    float* pinnedImage = nullptr;
    BGRPixel* d_padded1 = nullptr;
    BGRPixel* d_padded2 = nullptr;

    // ...
    // Allocate memory for host and device
    check(cudaHostAlloc((void**)&pinnedImage, 3 * imageSize * sizeof(float), 0));
    check(cudaMalloc((void**)&d_padded1, paddedSize * sizeof(BGRPixel)));
    check(cudaMalloc((void**)&d_padded2, paddedSize * sizeof(BGRPixel)));

    // Copy image to pinned memory
    for (int i = 0; i < 3 * imageSize; ++i) {
        pinnedImage[i] = (float)srcImage.m_pixels[i];
    }

    // ...
}
</syntaxhighlight>
|}

=== Device Memory Management ===
To produce a blurred pixel, the surrounding pixels must be sampled; in some cases this means sampling pixels outside the bounds of the image. In the original program, a simple if check determined whether a pixel lay outside the bounds of the image and, if so, a black pixel was returned instead. That branch would most likely have caused massive thread divergence in a kernel, so the images created in device memory instead carry additional padding of black pixels to compensate. Two such images were created: one for the horizontal blur pass and one for the vertical blur pass. Other small device arrays were also needed to store the Gaussian integrals that produce the blurring effect.<br>
{| class="wikitable mw-collapsible mw-collapsed"
! Padding example
|-
|
<div style="display:inline;">[[File:shrunk.png]]</div>
<div style="display:inline;">[[File:pad.png]]</div>
<br>
This is how the image would be padded for a 3x3 sigma blur.

The original image is 2560x1600 -> 11.7MB.

With blur sigmas [x = 3, y = 3] and conversion to float, the padded images will be 2600x1640 -> 48.8MB.

That is an increase of 4.1% in pixels and, with the conversion from uint8_t to float, a total increase of 317% in memory requirements on the GPU.

Since two padded images are needed, at least 97.6MB will be resident on the GPU.
|}

=== Host to Device ===
To copy the pinned image to the device, an array of streams was used to asynchronously copy each row of the image over. Doing so allowed the rows to be copied easily without infringing on the extra padding pixels.

=== Kernels ===
First, one image is blurred horizontally: one image is used as a reference while the other is written to. The kernels are also launched on the streams, so each stream blurs a single row at a time. After the horizontal blur is finished, the vertical blur is launched in the same manner, except that the image written in the previous pass is now the reference while the former reference is written to. Both passes can use the same kernel because the sampling loop simply steps through pixels; changing the step size samples either across the row or down the column.

=== Device to Host ===
Once the blurring is done, the image is copied back using the streams in the same way it was copied over.

=== Code ===
{| class="wikitable mw-collapsible mw-collapsed"
! Unoptimized - BlurImage Excerpt
|-
|
<syntaxhighlight lang="cpp">
const int ntpb = 1024;
const int STREAMS = 32;

void check(cudaError_t error) {
    if (error != cudaSuccess) {
        throw std::exception(cudaGetErrorString(error));
    }
}

struct SImageData
{
    SImageData()
        : m_width(0)
        , m_height(0)
    { }

    long m_width;
    long m_height;
    long m_pitch;
    std::vector<uint8_t> m_pixels;
};

float Gaussian(float sigma, float x)
{
    return expf(-(x*x) / (2.0f * sigma*sigma));
}

float GaussianSimpsonIntegration(float sigma, float a, float b)
{
    return ((b - a) / 6.0f) *
        (Gaussian(sigma, a) + 4.0f * Gaussian(sigma, (a + b) / 2.0f) + Gaussian(sigma, b));
}

std::vector<float> GaussianKernelIntegrals(float sigma, int taps)
{
    std::vector<float> ret;
    float total = 0.0f;
    for (int i = 0; i < taps; ++i)
    {
        float x = float(i) - float(taps / 2);
        float value = GaussianSimpsonIntegration(sigma, x - 0.5f, x + 0.5f);
        ret.push_back(value);
        total += value;
    }
    // normalize it
    for (unsigned int i = 0; i < ret.size(); ++i)
    {
        ret[i] /= total;
    }
    return ret;
}

struct BGRPixel {
    float b;
    float g;
    float r;
};
__global__ void blur_kernel(BGRPixel* imageIn, BGRPixel* imageOut, float* blur, int n_blur, int x, int start, int jump) {
    int idx = blockDim.x*blockIdx.x + threadIdx.x; // Location on the row

    if (idx < x) {
        int id = start + idx;
        int bstart = id - (n_blur / 2)*jump;

        BGRPixel pixel{ 0.0f, 0.0f, 0.0f };

        for (int i = 0; i < n_blur; ++i) {
            int bid = bstart + i*jump;
            float iblur = blur[i];

            pixel.b += imageIn[bid].b * iblur;
            pixel.g += imageIn[bid].g * iblur;
            pixel.r += imageIn[bid].r * iblur;
        }

        imageOut[id].b = pixel.b;
        imageOut[id].g = pixel.g;
        imageOut[id].r = pixel.r;
    }
}

void BlurImage(const SImageData& srcImage, SImageData &destImage, float xblursigma, float yblursigma, unsigned int xblursize, unsigned int yblursize)
{
    int xImage = srcImage.m_width;          // Width of image
    int yImage = srcImage.m_height;         // Height of image
    int imageSize = xImage*yImage;

    int xPadded = xImage + (xblursize - 1); // Width including padding
    int yPadded = yImage + (yblursize - 1); // Height including padding
    int paddedSize = xPadded*yPadded;

    int xPad = xblursize / 2;               // Number of padding columns on each side
    int yPad = yblursize / 2;
    int padOffset = xPadded*yPad + xPad;    // Offset to first pixel in padded image

    float* pinnedImage = nullptr;
    BGRPixel* d_padded1 = nullptr;
    BGRPixel* d_padded2 = nullptr;

    float* d_xblur = nullptr; // XBlur integrals
    int n_xblur;              // N

    float* d_yblur = nullptr; // YBlur integrals
    int n_yblur;              // N

    // Allocate memory for host and device
    check(cudaHostAlloc((void**)&pinnedImage, 3 * imageSize * sizeof(float), 0));
    check(cudaMalloc((void**)&d_padded1, paddedSize * sizeof(BGRPixel)));
    check(cudaMalloc((void**)&d_padded2, paddedSize * sizeof(BGRPixel)));

    // Copy image to pinned memory
    for (int i = 0; i < 3 * imageSize; ++i) {
        pinnedImage[i] = (float)srcImage.m_pixels[i];
    }

    // Allocate and assign integrals
    {
        auto row_blur = GaussianKernelIntegrals(xblursigma, xblursize);
        auto col_blur = GaussianKernelIntegrals(yblursigma, yblursize);

        // ROW
        n_xblur = row_blur.size();
        check(cudaMalloc((void**)&d_xblur, n_xblur * sizeof(float)));
        check(cudaMemcpy(d_xblur, row_blur.data(), n_xblur * sizeof(float), cudaMemcpyHostToDevice));

        // COLUMN
        n_yblur = col_blur.size();
        check(cudaMalloc((void**)&d_yblur, n_yblur * sizeof(float)));
        check(cudaMemcpy(d_yblur, col_blur.data(), n_yblur * sizeof(float), cudaMemcpyHostToDevice));
    }

    cudaStream_t stream[STREAMS];

    int nblks = (xImage + (ntpb - 1)) / ntpb;

    for (int i = 0; i < STREAMS; ++i) {
        check(cudaStreamCreate(&stream[i]));
    }

    for (int i = 0; i < yImage;) {
        for (int j = 0; j < STREAMS && i < yImage; ++j, ++i) {
            cudaMemcpyAsync(d_padded1 + padOffset + i*xPadded, pinnedImage + (3 * i*xImage),
                3 * xImage * sizeof(float), cudaMemcpyHostToDevice, stream[j]);
        }
    }

    for (int i = 0; i < yImage;) {
        for (int j = 0; j < STREAMS && i < yImage; ++j, ++i) {
            blur_kernel << <nblks, ntpb, 0, stream[j] >> > (d_padded1, d_padded2, d_xblur, n_xblur,
                xImage, padOffset + i*xPadded, 1);
        }
    }

    for (int i = 0; i < yImage;) {
        for (int j = 0; j < STREAMS && i < yImage; ++j, ++i) {
            blur_kernel << <nblks, ntpb, 0, stream[j] >> > (d_padded2, d_padded1, d_yblur, n_yblur,
                xImage, padOffset + i*xPadded, xPadded);
        }
    }

    for (int i = 0; i < yImage;) {
        for (int j = 0; j < STREAMS && i < yImage; ++j, ++i) {
            check(cudaMemcpyAsync(pinnedImage + (3 * i*xImage), d_padded1 + padOffset + i*xPadded,
                xImage * sizeof(BGRPixel), cudaMemcpyDeviceToHost, stream[j]));
        }
    }

    for (int i = 0; i < STREAMS; ++i) {
        check(cudaStreamSynchronize(stream[i]));
        check(cudaStreamDestroy(stream[i]));
    }

    destImage.m_width = srcImage.m_width;
    destImage.m_height = srcImage.m_height;
    destImage.m_pitch = srcImage.m_pitch;
    destImage.m_pixels.resize(srcImage.m_pixels.size());

    for (int i = 0; i < 3 * imageSize; i++) {
        destImage.m_pixels[i] = (uint8_t)pinnedImage[i];
    }

    check(cudaFree(d_xblur));
    check(cudaFree(d_yblur));

    check(cudaFreeHost(pinnedImage));
    check(cudaFree(d_padded1));
    check(cudaFree(d_padded2));

    check(cudaDeviceReset());
}
</syntaxhighlight>
|}

== Results ==
Obtained using Quadro K620<br>
[[File:uvso2.png]]
[[File:usession.png]]
[[File:ktimes.png]]
<br>
Using a Quadro K2000<br>
[[File:streams.png]]

== Output Images ==
[http://imgur.com/a/CtMOc Image Gallery]

[https://seneca-my.sharepoint.com/personal/jkraitberg_myseneca_ca/_layouts/15/guestaccess.aspx?docid=099a13c42168943b587de4b59e4634e06&authkey=Afl_iMqjNyFhoYu3bopOw5E 135MB Image]

[https://seneca-my.sharepoint.com/personal/jkraitberg_myseneca_ca/_layouts/15/guestaccess.aspx?docid=007880dac1dd74d09b74fc448dc3fac38&authkey=AdqHCKEjZCXzlyftjZWxFCA 135MB 3x3 Result]

== Mandelbrot ==
{| class="wikitable mw-collapsible mw-collapsed"
! Unoptimized - Mandelbrot
|-
|
<syntaxhighlight lang="cpp">
//C++ Includes
#include <iostream>
#include <complex>
#include <vector>
#include <chrono>
#include <functional>
#include <cuda_runtime.h>

//CUDA Complex Numbers
#include <cuComplex.h>

//Helper Includes
#include "window.h"
#include "save_image.h"
#include "utils.h"

const int ntpb = 32;

//Compute Color for each pixel
__global__ void computeMandelbrot(int iter_max, int* d_colors,
    int fract_width, int fract_height, int scr_width, int scr_height,
    int fract_xmin, int fract_ymin)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y; //Row
    int col = blockIdx.x * blockDim.x + threadIdx.x; //Col

    int idx = row * scr_width + col; //Pixel Index

    if(col < scr_width && row < scr_height){

        //Use Floating Complex Numbers to calculate color for each pixel
        int result = 0;
        cuFloatComplex c = make_cuFloatComplex((float)col, (float)row);
        cuFloatComplex d = make_cuFloatComplex(
            cuCrealf(c) / (float)scr_width * fract_width + fract_xmin,
            cuCimagf(c) / (float)scr_height * fract_height + fract_ymin);
        cuFloatComplex z = make_cuFloatComplex(0.0f, 0.0f);

        while((cuCabsf(z) < 2.0f) && (result < iter_max)){
            z = (cuCaddf(cuCmulf(z,z),d));
            result++;
        }
        d_colors[idx] = result; //Output
    }
}

void mandelbrot(){
    window<int> scr(0, 1000, 0, 1000);        //Image Size
    window<float> fract(-2.2,1.2,-1.7,1.7);   //Fractal Size
    int iter_max = 500;                       //Iterations
    const char* fname = "mandlebrot_gpu.png"; //Output File Name
    bool smooth_color = true;                 //Color Smoothing

    int nblks = (scr.width() + ntpb - 1)/ ntpb; //Blocks
    std::vector<int> colors(scr.size());        //Output Vector

    //Allocate Device Memory
    int* d_colors;
    cudaMalloc((void**)&d_colors, scr.size() * sizeof(int));

    //Grid Layout
    dim3 dGrid(nblks, nblks);
    dim3 dBlock(ntpb, ntpb);

    //Execute Kernel
    auto start = std::chrono::steady_clock::now();
    computeMandelbrot<<<dGrid, dBlock>>>(iter_max, d_colors, fract.width(), fract.height(),
        scr.width(), scr.height(), fract.x_min(), fract.y_min());
    cudaDeviceSynchronize();
    auto end = std::chrono::steady_clock::now();

    //Output Time
    std::cout << "Time to generate " << fname << " = "
        << std::chrono::duration<float, std::milli>(end - start).count() << " [ms]" << std::endl;

    //Copy Data back to Host
    cudaMemcpy(colors.data(), d_colors, scr.size() * sizeof(int), cudaMemcpyDeviceToHost);

    //Plot Data and Free Memory
    plot(scr, colors, iter_max, fname, smooth_color);
    cudaFree(d_colors);
}

int main(){
    mandelbrot();
    return 0;
}
</syntaxhighlight>
|}
=== Objectives ===
The main objective was to refactor the get_number_iterations() function and the subsequent functions it calls that created the nested loops. The objective was met: all of the functions were refactored into a single device function that performs the calculation for a single pixel of the image. As the original program was written with doubles, all of the doubles were changed to floats.
=== Steps ===
=== Host Memory Management ===
No changes were needed to the host memory, as no data is copied from the host to the device. The vector on the host that contains the data was not changed; data from the device is copied back into this vector to output the plot file.
=== Device Memory Management ===
Only a single array, holding the value for each pixel, was created on the device. This array has a size of image width * image height; the row and column of each pixel are calculated from the index and are used in the complex number calculations, along with the values that specify the parameters of the fractal.
=== Kernels ===
The three functions from the original code (get_number_iterations(), escape() and scale()) were refactored into a single computeMandelbrot() function. The device kernel calculates the row and column for the pixel, then uses those values along with the picture width and the fractal parameters to calculate the pixel's value. Complex floating point arithmetic is done with the cuComplex.h header, which also provides the operations on complex numbers. As the threads do not rely on each other for any data, no use of __syncthreads() is required. As each thread finishes computing its value, it writes the result to the d_colors array.
=== Device to Host ===
After the kernel completes, the result array is copied back to the host with a single cudaMemcpy.
=== Results ===
The program was compiled with clang++, with icpc (the Intel Parallel Studio compiler), and with nvcc for the GPU. Runtimes for the standard clang++ version were extremely slow as the size of the resulting image increased. Compiling with icpc brought a significant improvement without modifying any code, drastically reducing runtimes purely on the CPU. The parallel CUDA version improved the runtime massively over both the clang++ and the icpc versions, as more values could be calculated in parallel.

[[Image:Mandelbrot.png | 750px]]
=== Output Images ===
[http://imgur.com/a/R3ZAH Image Output]
= Assignment 3 - Optimize =
=== Future Optimizations ===
As there are no other data-intensive tasks in this program, further optimization would involve creating streams of kernels and having them execute concurrently in order to improve the runtime of the current solution.