=== Code ===
The program can then be executed by running the compiled binary and it will display the time it took to generate the Mandelbrot set and save the pictures.
{| class="wikitable mw-collapsible mw-collapsed"! Mandelbrot CPU( ... )|-|<syntaxhighlight lang== Observations ==="cpp">#include <iostream>#include <complex>#include <vector>#include <chrono>#include <functional>
The program takes a significant amount of time to run as the calculations are being done on the CPU#include "window. There are nested loops present within the program that can be parallelized to make the program fasterh"#include "save_image.h"#include "utils.h"
The code also has the size of the image and the iterations hard// clang++ -coded which can be modified to make the program significantly longer to process and make it tough on the GPU's for benchmarking and stability testing by running the process in a loopstd=c++11 -stdlib=libc++ -O3 save_image.cpp utils. The code is relatively straight forward and the parallelization should also be easy to implement and testcpp mandel.cpp -lfreeimage
// Use an alias to simplify the use of complex type
using Complex = std::complex<float>;
=== Hotspot ===// Convert a pixel coordinate to the complex domainComplex scale(window<int> &scr, window<float> &fr, Complex c) { Complex aux(c.real() / (float)scr.width() * fr.width() + fr.x_min(), c.imag() / (float)scr.height() * fr.height() + fr.y_min()); return aux;}
Hotspot for // Check if a point is in the program was found in set or escapes to infinity, return the fractalnumber if iterationsint escape() Complex c, int iter_max, const std::function which calls the get_iterations<Complex(Complex, Complex) function that contains 2-nested for loops and a call to escape(> &func) which contains a while loop. Profiling the runtime with Instruments on OSX displayed that the fractal{ Complex z(0) function took up the most amount of runtime and this is the function that will be parallelized using CUDA. Once the function is parallelized, the iterations and size of the image can be increased in order to make the computation relatively stressful on the GPU to get a benchmark or looped in order to do stress testing for GPUs.; int iter = 0;
while (abs(z) < 2.0 && iter < iter_max) {
z = func(z, c);
iter++;
}
return iter;
}
// Loop over each pixel from our image and check if the points associated with this pixel escape to infinityvoid get_number_iterations(window<int> &scr, window<float> &fract, int iter_max, std::vector<int> &colors, const std::function<Complex( Complex, Complex)> &func) { int k =0, progress =-1; for(int i = Profiling Data Screenshots scr.y_min(); i < scr.y_max(); ++i) { for(int j =scr.x_min(); j < scr.x_max(); ++j) { Complex c((float)j, (float)i); c =scale(scr, fract, c); colors[k] =escape(c, iter_max, func); k++; } if(progress < (int)(i*100.0/scr.y_max())){ progress = (int)(i*100.0/scr.y_max()); std::cout << progress << "%\n"; } }}
Profile void fractal(window<int> &scr, window<float> &fract, int iter_max, std::vector<int> &colors, const std::function<Complex( Complex, Complex)> &func, const char *fname, bool smooth_color) { auto start = std::chrono::steady_clock::now(); get_number_iterations(scr, fract, iter_max, colors, func); auto end = std::chrono::steady_clock::now(); std::cout << "Time to generate " << fname << " = " << std::chrono::duration <float, std::milli> (end - start).count() << " [httpsms]" << std:://drive.google.com/open?id=0B2Y_atB3DptbUG5oRWMyUGNQdlU Profile]endl;
Hotspot Code - [https: //drive.google.com/open?id=0B2Y_atB3DptbRlhCUTNyeEFDbEk Hotspot Code]Save (show) the result as an image plot(scr, colors, iter_max, fname, smooth_color);}
void mandelbrot() { // Define the size of the image window<int> scr(0, 1000, 0, 1000); // The domain in which we test for points window<float> fract(-2.2, 1.2, ---1.7, 1.7);
== Introduction : GPU Benchmarking //Testing for NBody : Joshua Kraitberg =The function used to calculate the fractal auto func =[] (Complex z, Complex c) -> Complex {return z * z + c; };
This program uses Newtonian mechanics and a four-order symplectic Candy-Rozmus integration int iter_max = 500; const char *fname = "mandelbrot.png"; bool smooth_color = true; std::vector<int> colors(a symplectic algorithm guarantees exact conservation of energy and angular momentum)scr. The initial conditions are obtained from JPL Horizons, ahd constants size(like masses, gravitational constant) are those recommended by the International Astronomical Union. The program currently does not take into account effects like general relativity, the non-spherical shapes of celestial objects, tidal effects on Earth, etc. It also does not take the 500 asteroids used by JPL Horizons into accound in its model of the Solar System.);
[https: //githubExperimental zoom (bugs ?).comThis will modify the fract window (the domain in which we calculate the fractal function) /fding/nbody Source]zoom(1.0, -1.225, -1.22, 0.15, 0.16, fract); //Z2 fractal(scr, fract, iter_max, colors, func, fname, smooth_color);}
=== Compilation Instructions: ===void triple_mandelbrot() { // Define the size of the image window<int> scr(0, 2000, 0, 2000); // The domain in which we test for points window<float> fract(-1.5, 1.5, -1.5, 1.5);
For Unix /Linux based systems/ The function used to calculate the fractal auto func = [] (Complex z, Complex c) -> Complex {return z * z * z + c; };  int iter_max = 500; const char *fname = "triple_mandelbrot.png"; bool smooth_color = true; std::vector<int> colors(scr.size());  fractal(scr, fract, iter_max, colors, func, fname, smooth_color);} int main() {  mandelbrot(); // triple_mandelbrot();  return 0;}
g++ -std=c++11 c++</nbody.cppsyntaxhighlight>|}
=== Observations ===
The program takes a significant amount of time to run because the calculations are done on the CPU. There are nested loops within the program that can be parallelized to make the program faster. The size of the image and the number of iterations are hard-coded; these can be raised to make the program take significantly longer, which makes it useful for GPU benchmarking and stability testing by running the process in a loop. The code is relatively straightforward, so the parallelization should be easy to implement and test.
=== Hotspot ===
The hotspot for the program was found in the fractal() function, which calls the get_number_iterations() function containing two nested for loops and a call to escape(), which contains a while loop. Profiling the runtime with Instruments on OSX showed that the fractal() function took up the largest share of the runtime, and this is the function that will be parallelized using CUDA. Once the function is parallelized, the iterations and the size of the image can be increased in order to make the computation stressful enough on the GPU to get a benchmark, or the run can be looped in order to do stress testing for GPUs.

=== Profiling Data Screenshots ===
Profile - [https://drive.google.com/open?id=0B2Y_atB3DptbUG5oRWMyUGNQdlU Profile]
Hotspot Code - [https://drive.google.com/open?id=0B2Y_atB3DptbRlhCUTNyeEFDbEk Hotspot Code]
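As a rough illustration of the plan described under Hotspot: each pixel of the image can be handled by one CUDA thread, so the two nested loops disappear. The indexing sketch below is only a sketch and assumes the same row/column layout as the full kernel shown later under Assignment 2/3; the real kernel also performs the escape-time iteration.

<syntaxhighlight lang="cpp">
// Sketch only: how the nested pixel loops map onto a 2D CUDA grid (one thread per pixel).
__global__ void perPixel(int* colors, int scr_width, int scr_height) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;   // replaces the outer loop over i
    int col = blockIdx.x * blockDim.x + threadIdx.x;   // replaces the inner loop over j
    if (col < scr_width && row < scr_height) {
        colors[row * scr_width + col] = 0;             // escape(c, iter_max, func) would go here
    }
}
</syntaxhighlight>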
----

== Introduction : GPU Benchmarking/Testing for NBody : Joshua Kraitberg ==
This program uses Newtonian mechanics and a fourth-order symplectic Candy-Rozmus integration (a symplectic algorithm guarantees exact conservation of energy and angular momentum). The initial conditions are obtained from JPL Horizons, and constants (like masses and the gravitational constant) are those recommended by the International Astronomical Union. The program currently does not take into account effects like general relativity, the non-spherical shapes of celestial objects, tidal effects on Earth, etc. It also does not take the 500 asteroids used by JPL Horizons into account in its model of the Solar System.

[https://github.com/fding/nbody Source]

=== Compilation Instructions: ===
For Unix/Linux based systems:

 g++ -std=c++11 c++/nbody.cpp

=== Observations ===
The program is quite fast for being a single-threaded CPU application. Almost all the CPU time is spent manipulating data and iterating in vectors.

=== Hotspot ===
Essentially all the running time is spent doing calculations on vectors. The dowork function iteratively calls the CRO_step function found in the integrators.h file. The CRO_step function is where most of the vector calculations take place. A large amount of work is also done in the calculate_a function, which is used to calculate the acceleration on all the planets.

=== Profiling Data and Screenshots ===
{| class="wikitable mw-collapsible mw-collapsed"
! NBody Hot Functions
|-
|
<syntaxhighlight lang="cpp">
void dowork(double t){
    int numtimes=int(abs(t/dt));
    dt=t/double(numtimes+1);
    numtimes=numtimes+1;
</syntaxhighlight>
<syntaxhighlight lang="cpp">
    for (int i=0;i<numtimes;i++){
        CRO_step(dt,a);
        // ... (rest of the integration loop not shown in this excerpt)
    }
}
</syntaxhighlight>
|}
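For context, calculate_a is the kind of O(N&sup2;) pairwise-gravity loop that dominates an N-body step, which is why it shows up as a hot function and is a natural candidate for parallelization. The sketch below is a hypothetical illustration of that pattern, not the nbody repository's actual code; the Body struct and function name are assumptions.

<syntaxhighlight lang="cpp">
#include <vector>
#include <cmath>

struct Body { double x, y, z, mass, ax, ay, az; };

// Hypothetical sketch of a calculate_a-style pass: every body accumulates the
// gravitational acceleration contributed by every other body (O(N^2) work).
void calculate_a_sketch(std::vector<Body> &bodies, double G) {
    for (auto &b : bodies) { b.ax = b.ay = b.az = 0.0; }
    for (std::size_t i = 0; i < bodies.size(); ++i) {
        for (std::size_t j = 0; j < bodies.size(); ++j) {
            if (i == j) continue;
            double dx = bodies[j].x - bodies[i].x;
            double dy = bodies[j].y - bodies[i].y;
            double dz = bodies[j].z - bodies[i].z;
            double r2 = dx*dx + dy*dy + dz*dz;
            double inv_r3 = 1.0 / (std::sqrt(r2) * r2);
            bodies[i].ax += G * bodies[j].mass * dx * inv_r3;
            bodies[i].ay += G * bodies[j].mass * dy * inv_r3;
            bodies[i].az += G * bodies[j].mass * dz * inv_r3;
        }
    }
}
</syntaxhighlight>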
An even better way would be to integrate the Gaussian function instead of just taking point samples. Refer to the two graphs on the right.<br/>
The graphs plot the continuous distribution function and the discrete kernel approximation. One thing to look out for is the tails of the distribution vs. the kernel support:<br/>
For the current configuration, we have 13.36% of the curve’s area outside the discrete kernel. Note that the weights are renormalized such that the sum of all weights is one. Or in other words:<br/>
the probability mass outside the discrete kernel is redistributed evenly to all pixels within the kernel. The weights are calculated by numerical integration of the continuous Gaussian distribution.<br/>
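This is the same approach taken by the GaussianKernelIntegrals() helper in the code further down the page; a compact sketch of the idea (Simpson's rule over each one-pixel interval, then renormalization so the weights sum to one) looks like this. Function names here are illustrative only.

<syntaxhighlight lang="cpp">
#include <cmath>
#include <vector>

// Unnormalized Gaussian density
float GaussianDensity(float sigma, float x) {
    return std::exp(-(x * x) / (2.0f * sigma * sigma));
}

// Integrate the Gaussian over [a, b] with Simpson's rule instead of point sampling
float GaussianSimpson(float sigma, float a, float b) {
    return ((b - a) / 6.0f) *
           (GaussianDensity(sigma, a) + 4.0f * GaussianDensity(sigma, (a + b) / 2.0f) + GaussianDensity(sigma, b));
}

// Build a discrete kernel of `taps` weights, one per pixel-wide interval,
// then renormalize so the truncated tail mass is redistributed over the kernel.
std::vector<float> MakeKernel(float sigma, int taps) {
    std::vector<float> weights;
    float total = 0.0f;
    for (int i = 0; i < taps; ++i) {
        float x = float(i) - float(taps / 2);
        float w = GaussianSimpson(sigma, x - 0.5f, x + 0.5f);
        weights.push_back(w);
        total += w;
    }
    for (float &w : weights) w /= total;  // the normalization step described above
    return weights;
}
</syntaxhighlight>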
To compile and run the program:
# Navigate to the directory you want to run the program in.
# Save [http://matrix.senecac.on.ca/~cpaul12/cinque_terre.bmp this] image and place it into the directory you will be running the program from.
# Copy the Linux version of the main source code above and paste it into a [your chosen file name].cpp file.
# Copy the Linux version of the header source code above and paste it into a file named windows.h.
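For the final build-and-run step, a command along these lines should work; the source file name and the output file name are placeholders (the exact flags and argument order were not given in the original steps, so treat this as an assumption):

 g++ -std=c++11 -O3 [your chosen file name].cpp -o blur
 ./blur cinque_terre.bmp blurred.bmp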
for parallelization using CUDA. The sigma (&sigma;) and the kernel size can be increased in order to make the computation stressful on the GPU to get a significant benchmark.
= Assignment 2/3 - Parallelize & Optimize =
&#42; For gaussian blur we say it's unoptimized because we feel that there is more that can be done to reduce the execution times.<br/>
&nbsp;&nbsp;The code displayed in the code snippets does use CUDA parallel constructs and fine-tuning techniques such as streaming (async).
== Gaussian Blur ==
{| class="wikitable mw-collapsible mw-collapsed"
! Unoptimized * - BlurImage( ... )
|-
|
== Steps ==
=== Host Memory Management ===
In the original program a bmp is loaded into a vector of uint8_t. This is not ideal for CUDA, therefore an array of pinned memory was allocated instead. This array contains the same number of elements but stores them as a structure, "BGRPixel", which is three contiguous floats. The vector is then transferred over to pinned memory.
{| class="wikitable mw-collapsible mw-collapsed"
! Padding exampleHost Memory Management - Code( ... )
|-
| <syntaxhighlight lang="cpp">struct SImageData{ SImageData() : m_width(0) , m_height(0) { }
<div style="display:inline long m_width;">[[File:shrunk.png]] long m_height;</div> long m_pitch;<div style="display std:inline;">[[File:pad.png]]vector</divuint8_t>m_pixels;<br>This is how the image would be padded for 3x3 sigma blur. The original image is 2560x1600 -> 11.7MB With blur sigmas [x = 3, y = 3] and conversion to float the padded images will be 2600x1640 -> 48.8MB Increase of 4.1% pixels and with the conversion for uint8_t to float total increase of 317% in memory requirements on the GPU Since two padded images are needed at least 97.6MB will be on the GPU};
|struct BGRPixel { float b; float g; float r;};
<syntaxhighlight lang="cpp">
void BlurImage(const SImageData& srcImage, SImageData &destImage, float xblursigma, float yblursigma, unsigned int xblursize, unsigned int yblursize)
{
    int xImage = srcImage.m_width;   // Width of image
    int yImage = srcImage.m_height;  // Height of image
    int imageSize = xImage*yImage;

    int xPadded = xImage + (xblursize - 1); // Width including padding
    int yPadded = yImage + (yblursize - 1); // Height including padding
    int paddedSize = xPadded*yPadded;

    int xPad = xblursize / 2; // Number of padding columns on each side
    int yPad = yblursize / 2;
    int padOffset = xPadded*yPad + xPad; // Offset to first pixel in padded image
</syntaxhighlight>
<syntaxhighlight lang="cpp">
    float* pinnedImage = nullptr;
    BGRPixel* d_padded1 = nullptr;
    BGRPixel* d_padded2 = nullptr;

    // ...

    // Allocate memory for host and device
    check(cudaHostAlloc((void**)&pinnedImage, 3 * imageSize * sizeof(float), 0));
    check(cudaMalloc((void**)&d_padded1, paddedSize * sizeof(BGRPixel)));
    check(cudaMalloc((void**)&d_padded2, paddedSize * sizeof(BGRPixel)));

    // Copy image to pinned memory
    for (int i = 0; i < 3 * imageSize; ++i) {
        pinnedImage[i] = (float)srcImage.m_pixels[i];
    }

    // ...
}
</syntaxhighlight>
|}

=== Device Memory Management ===
To get a blurred pixel the surrounding pixels must be sampled; in some cases this means sampling pixels outside the bounds of the image. In the original, a simple if check was used to determine whether the pixel was outside the bounds of the image, and if it was, a black pixel was returned instead. This if statement most likely would have caused massive thread divergence in a kernel, therefore the images created in device memory featured additional padding of black pixels to compensate for this. Two such images were created, one to perform the horizontal blur and one to perform the vertical blur. Other small device arrays were also needed to store the Gaussian integrals that are used to produce the blurring effect.<br>
{| class="wikitable mw-collapsible mw-collapsed"
! Padding example
|-
|
<div style="display:inline;">[[File:shrunk.png]]</div><div style="display:inline;">[[File:pad.png]]</div><br>
This is how the image would be padded for a 3x3 sigma blur. The original image is 2560x1600 -> 11.7MB. With blur sigmas [x = 3, y = 3] and conversion to float, the padded images will be 2600x1640 -> 48.8MB each. That is an increase of 4.1% in pixels and, with the conversion from uint8_t to float, a total increase of 317% in memory requirements on the GPU. Since two padded images are needed, at least 97.6MB will be on the GPU.
|}

=== Host to Device ===
To copy the pinned image to the device, an array of streams was used to asynchronously copy each row of the image over. Doing so allowed the rows to be easily copied over while avoiding infringing on the extra padding pixels.

=== Kernels ===
First one image is blurred horizontally. One image is used as a reference while the other is written to. Kernels are also executed using the streams, so that each stream blurs a single row at a time. After the horizontal blur is finished the vertical blur is launched in the same manner, except that the previously written-to image is used as the reference while the previous reference is now written to. The two blurs are able to use the same kernel because the pixel sampling technique works by iterating through pixels; the step size can simply be changed to sample across the row or down the column.

=== Device to Host ===
After that is done the image is copied back using the streams in the same way it was copied over.

=== Code ===
{| class="wikitable mw-collapsible mw-collapsed"
! Unoptimized* - BlurImage( ... )
|-
|
<syntaxhighlight lang="cpp">
const int ntpb = 1024;
const int STREAMS = 32;

void check(cudaError_t error) {
    if (error != cudaSuccess) {
        throw std::exception(cudaGetErrorString(error));
    }
}

struct SImageData
{
    SImageData()
        : m_width(0)
        , m_height(0)
    { }

    long m_width;
    long m_height;
    long m_pitch;
    std::vector<uint8_t> m_pixels;
};

float Gaussian(float sigma, float x)
{
    return expf(-(x*x) / (2.0f * sigma*sigma));
}

float GaussianSimpsonIntegration(float sigma, float a, float b)
{
    return ((b - a) / 6.0f) * (Gaussian(sigma, a) + 4.0f * Gaussian(sigma, (a + b) / 2.0f) + Gaussian(sigma, b));
}

std::vector<float> GaussianKernelIntegrals(float sigma, int taps)
{
    std::vector<float> ret;
    float total = 0.0f;
    for (int i = 0; i < taps; ++i)
    {
        float x = float(i) - float(taps / 2);
        float value = GaussianSimpsonIntegration(sigma, x - 0.5f, x + 0.5f);
        ret.push_back(value);
        total += value;
    }
    // normalize it
    for (unsigned int i = 0; i < ret.size(); ++i)
    {
        ret[i] /= total;
    }
    return ret;
}

struct BGRPixel {
    float b;
    float g;
    float r;
};

__global__ void blur_kernel(BGRPixel* imageIn, BGRPixel* imageOut, float* blur, int n_blur, int x, int start, int jump) {
    int idx = blockDim.x*blockIdx.x + threadIdx.x; // Location on the row

    if (idx < x) {
        int id = start + idx;
        int bstart = id - (n_blur / 2)*jump;

        BGRPixel pixel{ 0.0f, 0.0f, 0.0f };

        for (int i = 0; i < n_blur; ++i) {
            int bid = bstart + i*jump;
            float iblur = blur[i];

            pixel.b += imageIn[bid].b * iblur;
            pixel.g += imageIn[bid].g * iblur;
            pixel.r += imageIn[bid].r * iblur;
        }

        imageOut[id].b = pixel.b;
        imageOut[id].g = pixel.g;
        imageOut[id].r = pixel.r;
    }
}

void BlurImage(const SImageData& srcImage, SImageData &destImage, float xblursigma, float yblursigma, unsigned int xblursize, unsigned int yblursize)
{
    int xImage = srcImage.m_width;   // Width of image
    int yImage = srcImage.m_height;  // Height of image
    int imageSize = xImage*yImage;

    int xPadded = xImage + (xblursize - 1); // Width including padding
    int yPadded = yImage + (yblursize - 1); // Height including padding
    int paddedSize = xPadded*yPadded;

    int xPad = xblursize / 2; // Number of padding columns on each side
    int yPad = yblursize / 2;
    int padOffset = xPadded*yPad + xPad; // Offset to first pixel in padded image

    float* pinnedImage = nullptr;
    BGRPixel* d_padded1 = nullptr;
    BGRPixel* d_padded2 = nullptr;

    float* d_xblur = nullptr; // XBlur integrals
    int n_xblur;              // N

    float* d_yblur = nullptr; // YBlur integrals
    int n_yblur;              // N

    // Allocate memory for host and device
    check(cudaHostAlloc((void**)&pinnedImage, 3 * imageSize * sizeof(float), 0));
    check(cudaMalloc((void**)&d_padded1, paddedSize * sizeof(BGRPixel)));
    check(cudaMalloc((void**)&d_padded2, paddedSize * sizeof(BGRPixel)));

    // Copy image to pinned memory
    for (int i = 0; i < 3 * imageSize; ++i) {
        pinnedImage[i] = (float)srcImage.m_pixels[i];
    }

    // Allocate and assign integrals
    {
        auto row_blur = GaussianKernelIntegrals(xblursigma, xblursize);
        auto col_blur = GaussianKernelIntegrals(yblursigma, yblursize);

        // ROW
        n_xblur = row_blur.size();
        check(cudaMalloc((void**)&d_xblur, n_xblur * sizeof(float)));
        check(cudaMemcpy(d_xblur, row_blur.data(), n_xblur * sizeof(float), cudaMemcpyHostToDevice));

        // COLUMN
        n_yblur = col_blur.size();
        check(cudaMalloc((void**)&d_yblur, n_yblur * sizeof(float)));
        check(cudaMemcpy(d_yblur, col_blur.data(), n_yblur * sizeof(float), cudaMemcpyHostToDevice));
    }

    cudaStream_t stream[STREAMS];

    int nblks = (xImage + (ntpb - 1)) / ntpb;

    for (int i = 0; i < STREAMS; ++i) {
        check(cudaStreamCreate(&stream[i]));
    }

    for (int i = 0; i < yImage;) {
        for (int j = 0; j < STREAMS && i < yImage; ++j, ++i) {
            cudaMemcpyAsync(d_padded1 + padOffset + i*xPadded, pinnedImage + (3 * i*xImage), 3 * xImage * sizeof(float), cudaMemcpyHostToDevice, stream[j]);
        }
    }

    for (int i = 0; i < yImage;) {
        for (int j = 0; j < STREAMS && i < yImage; ++j, ++i) {
            blur_kernel<<<nblks, ntpb, 0, stream[j]>>>(d_padded1, d_padded2, d_xblur, n_xblur, xImage, padOffset + i*xPadded, 1);
        }
    }

    for (int i = 0; i < yImage;) {
        for (int j = 0; j < STREAMS && i < yImage; ++j, ++i) {
            blur_kernel<<<nblks, ntpb, 0, stream[j]>>>(d_padded2, d_padded1, d_yblur, n_yblur, xImage, padOffset + i*xPadded, xPadded);
        }
    }

    for (int i = 0; i < yImage;) {
        for (int j = 0; j < STREAMS && i < yImage; ++j, ++i) {
            check(cudaMemcpyAsync(pinnedImage + (3 * i*xImage), d_padded1 + padOffset + i*xPadded, xImage * sizeof(BGRPixel), cudaMemcpyDeviceToHost, stream[j]));
        }
    }

    for (int i = 0; i < STREAMS; ++i) {
        check(cudaStreamSynchronize(stream[i]));
        check(cudaStreamDestroy(stream[i]));
    }

    destImage.m_width = srcImage.m_width;
    destImage.m_height = srcImage.m_height;
    destImage.m_pitch = srcImage.m_pitch;
    destImage.m_pixels.resize(srcImage.m_pixels.size());

    for (int i = 0; i < 3 * imageSize; i++) {
        destImage.m_pixels[i] = (uint8_t)pinnedImage[i];
    }

    check(cudaFree(d_xblur));
    check(cudaFree(d_yblur));

    check(cudaFreeHost(pinnedImage));
    check(cudaFree(d_padded1));
    check(cudaFree(d_padded2));

    check(cudaDeviceReset());
}
</syntaxhighlight>
|}

== Results ==
Obtained using a Quadro K620<br>
[[File:uvso2.png]][[File:usession.png]][[File:ktimes.png]]<br>
Using a Quadro K2000<br>
[[File:streams.png]]

== Output Images ==
[http://imgur.com/a/CtMOc Image Gallery]
[https://seneca-my.sharepoint.com/personal/jkraitberg_myseneca_ca/_layouts/15/guestaccess.aspx?docid=099a13c42168943b587de4b59e4634e06&authkey=Afl_iMqjNyFhoYu3bopOw5E 135MB Image]
[https://seneca-my.sharepoint.com/personal/jkraitberg_myseneca_ca/_layouts/15/guestaccess.aspx?docid=007880dac1dd74d09b74fc448dc3fac38&authkey=AdqHCKEjZCXzlyftjZWxFCA 135MB 3x3 Result]

== Mandelbrot ==
{| class="wikitable mw-collapsible mw-collapsed"
! Unoptimized - Mandelbrot( ... )
|-
|
<syntaxhighlight lang="cpp">
//C++ Includes
#include <iostream>
#include <complex>
#include <vector>
#include <chrono>
#include <functional>
#include <cuda_runtime.h>

//CUDA Complex Numbers
#include <cuComplex.h>

//Helper Includes
#include "window.h"
#include "save_image.h"
#include "utils.h"

const int ntpb = 32;

//Compute Color for each pixel
__global__ void computeMandelbrot(int iter_max, int* d_colors, int fract_width, int fract_height, int scr_width, int scr_height, int fract_xmin, int fract_ymin){
    int row = blockIdx.y * blockDim.y + threadIdx.y; //Row
    int col = blockIdx.x * blockDim.x + threadIdx.x; //Col

    int idx = row * scr_width + col; //Pixel Index

    if(col < scr_width && row < scr_height){
        //Use Floating Complex Numbers to calculate color for each pixel
        int result = 0;
        cuFloatComplex c = make_cuFloatComplex((float)col, (float)row);
        cuFloatComplex d = make_cuFloatComplex(cuCrealf(c) / (float)scr_width * fract_width + fract_xmin,
            cuCimagf(c) / (float)scr_height * fract_height + fract_ymin);
        cuFloatComplex z = make_cuFloatComplex(0.0f, 0.0f);

        while((cuCabsf(z) < 2.0f) && (result < iter_max)){
            z = (cuCaddf(cuCmulf(z,z),d));
            result++;
        }
        d_colors[idx] = result; //Output
    }
}

void mandelbrot(){
    window<int> scr(0, 1000, 0, 1000);      //Image Size
    window<float> fract(-2.2,1.2,-1.7,1.7); //Fractal Size
    int iter_max = 500;                     //Iterations
    const char* fname = "mandlebrot_gpu.png"; //Output File Name
    bool smooth_color = true;               //Color Smoothing

    int nblks = (scr.width() + ntpb - 1)/ ntpb; //Blocks
    std::vector<int> colors(scr.size());        //Output Vector

    //Allocate Device Memory
    int* d_colors;
    cudaMalloc((void**)&d_colors, scr.size() * sizeof(int));

    //Grid Layout
    dim3 dGrid(nblks, nblks);
    dim3 dBlock(ntpb, ntpb);
    //Execute Kernel
    auto start = std::chrono::steady_clock::now();
    computeMandelbrot<<<dGrid, dBlock>>>(iter_max, d_colors, fract.width(), fract.height(), scr.width(), scr.height(), fract.x_min(), fract.y_min());
    cudaDeviceSynchronize();
    auto end = std::chrono::steady_clock::now();

    //Output Time
    std::cout << "Time to generate " << fname << " = " << std::chrono::duration <float, std::milli> (end - start).count() << " [ms]" << std::endl;

    //Copy Data back to Host
    cudaMemcpy(colors.data(), d_colors, scr.size() * sizeof(int), cudaMemcpyDeviceToHost);

    //Plot Data and Free Memory
    plot(scr, colors, iter_max, fname, smooth_color);
    cudaFree(d_colors);
}

int main(){
    mandelbrot();
    return 0;
}
</syntaxhighlight>
|}
== Objectives ==
The main objective was to refactor the get_number_iterations() function and the subsequent functions it calls that created the nested loops. The objective was met, as all of those functions were refactored into a single device function that does the calculation for a single pixel of the image. As the original program was written with doubles, all of the doubles were changed to floats.

== Steps ==
=== Host Memory Management ===
After that is done the image is copied back using a single memcpy to the host.
=== Results ===
The program was compiled using clang++, icpc (Intel Parallel Studio Compiler) and NVCC for the GPU. Runtimes for the standard clang++ version were extremely slow as the size of the resulting image increased. Compiling the program with icpc brought significant improvements without modifying any code and reduced runtimes drastically while still running purely on the CPU. The parallel CUDA version improved the runtime massively over both the clang++ build and the icpc build, as more values could be calculated in parallel.
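For reference, the GPU version can be built with nvcc in roughly the same way as the clang++ command shown with the CPU code above; the CUDA source file name below is an assumption, since this page does not name it:

 nvcc -std=c++11 -O3 save_image.cpp utils.cpp mandel_gpu.cu -lfreeimage -o mandel_gpu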
[[Image:Mandelbrot.png | 750px]]
=== Output Images ===
[http://imgur.com/a/R3ZAH Image Output]
=== Future Optimizations ===
As there aren't any data-intensive tasks in this program, further optimizations would include creating streams of kernels and having them execute concurrently in order to improve the runtime of the current solution.
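A rough sketch of that idea, assuming the image is split into horizontal bands with one kernel launch per band per stream. The kernel below is a placeholder rather than the project's computeMandelbrot(), and the stream count and block size are untuned assumptions.

<syntaxhighlight lang="cpp">
#include <cuda_runtime.h>

// Placeholder kernel: each thread writes one pixel of its band.
__global__ void bandKernel(int* out, int width, int rowOffset, int rows) {
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (col < width && row < rows)
        out[(rowOffset + row) * width + col] = rowOffset + row; // stand-in for the escape count
}

// Launch the work as several concurrent bands, one CUDA stream per band.
void launchInBands(int* d_out, int width, int height) {
    const int NSTREAMS = 4, ntpb = 32;            // assumptions, not tuned
    cudaStream_t streams[NSTREAMS];
    for (int s = 0; s < NSTREAMS; ++s) cudaStreamCreate(&streams[s]);

    int band = (height + NSTREAMS - 1) / NSTREAMS;
    dim3 dBlock(ntpb, ntpb);
    for (int s = 0; s < NSTREAMS; ++s) {
        int rowOffset = s * band;
        int rows = (rowOffset + band <= height) ? band : height - rowOffset;
        if (rows <= 0) break;
        dim3 dGrid((width + ntpb - 1) / ntpb, (rows + ntpb - 1) / ntpb);
        bandKernel<<<dGrid, dBlock, 0, streams[s]>>>(d_out, width, rowOffset, rows);
    }
    for (int s = 0; s < NSTREAMS; ++s) {
        cudaStreamSynchronize(streams[s]);
        cudaStreamDestroy(streams[s]);
    }
}
</syntaxhighlight>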
 
= Assignment 3 - Optimize =