= Team Avengers =
 
== Team Members ==
# [mailto:bacremonese-de-morai@senecacollege.ca?subject=gpu610 Bruno Alexander Cremonese de Morais], Pythagorean Triples
# [mailto:jsidhu26@senecacollege.ca?subject=gpu610 Jaideep Sidhu], Array Processing
[mailto:bacremonese-de-morai@senecacollege.ca?subject=gpu610 Email All]
 
  
 
== Progress ==

=== Assignment 1 ===
==== Bruno - Pythagorean Triples ====
[https://www.daniweb.com/programming/software-development/threads/345155/calculating-the-pythagorean-triples Pythagorean Triples by Violet_82]
 
"A Pythagorean triple is a triple of positive integers a, b, and c such that a right triangle exists with legs a,b and hypotenuse c. By the Pythagorean theorem, this is equivalent to finding positive integers a, b, and c satisfying:

a^2+b^2=c^2.

The smallest and best-known Pythagorean triple is (a,b,c)=(3,4,5). The right triangle having these side lengths is sometimes called the 3, 4, 5 triangle. [...]

In addition, one side of every Pythagorean triple is divisible by 3, another by 4, and another by 5. One side may have two of these divisors, as in (8, 15, 17), (7, 24, 25), and (20, 21, 29), or even all three, as in (11, 60, 61)."

Weisstein, Eric W. "Pythagorean Triple." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/PythagoreanTriple.html

This algorithm supposedly calculates all Pythagorean triples (not just primitive ones) up to a maximum hypotenuse value supplied by the user. It still needs some logical improvements to display only values within a requested starting range; these should not be hard to implement, and the program already serves its purpose of calculating every triple from the simplest one up to the given maximum hypotenuse. The code has been slightly modified to accept command-line parameters, facilitating its execution in UNIX environments.
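
Below is a minimal, self-contained sketch of the brute-force search described above (an illustrative reconstruction, not the repository's exact Triangle::calculateDimensions code):

<source lang="cpp">
// Illustrative sketch of the brute-force triple search described above;
// the actual Triangle::calculateDimensions implementation is in the repository.
#include <cstdlib>
#include <iostream>

void findTriples(int maxC) {
    // Try every combination of legs (a, b) and hypotenuse c up to maxC;
    // a < b < c avoids reporting the same triple twice.
    for (int c = 1; c <= maxC; c++)
        for (int b = 1; b < c; b++)
            for (int a = 1; a < b; a++)
                if (a * a + b * b == c * c)
                    std::cout << a << " " << b << " " << c << std::endl;
}

int main(int argc, char* argv[]) {
    if (argc != 2) return 1;
    findTriples(std::atoi(argv[1])); // maximum hypotenuse from the command line
    return 0;
}
</source>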

The flat profile generated by gprof for the program is as follows:

<pre>
Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  Ts/call  Ts/call  name
 97.67      5.98     5.98                             Triangle::calculateDimensions(double, double, double, int)
  0.00      5.98     0.00     2842     0.00     0.00  Triangle::printDimensions(double, double, double)
  0.00      5.98     0.00        1     0.00     0.00  _GLOBAL__sub_I__ZN8TriangleC2Edddi
</pre>

We can see that the function calculateDimensions, which is called only once, is responsible for almost all of the execution time. For this profiling run the program calculated all triples from the most primitive one (3, 4, 5) up to every triple with a hypotenuse of at most 1500. The function is O(n^3) because of its three nested for loops, which try every possible combination of side values, so generating all of the triples demands a lot of computational power.

Since every loop iteration executes the same instructions, I believe parallelizing this code should be straightforward and would make it a lot faster. The calculations could be divided into tasks, or the offload could be done in order, since no calculation depends explicitly on another.

For further utility, the results could be sorted before display, since the GPU will produce them in no particular order.

[https://github.com/brucremo/DPS915 Code and Execution Instructions on GitHub]

==== Jaideep - Array Processing ====

In [https://computing.llnl.gov/tutorials/parallel_comp/#ExamplesArray Blaise Barney's Notes on Array Processing], an example of array processing is discussed. The example "demonstrates calculations on 2-dimensional array elements; a function is evaluated on each array element."

I used the pseudo-code provided to write a program that creates and populates a 2-dimensional array of size n (provided by the user) with random numbers. In this case, the function applied to each element of the array is 'std::rand()/100'. The code is available at the link below:

[https://github.com/jsidhu26/DPS915 Link to Array Processing code]
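
For reference, a minimal sketch of what such a program can look like (illustrative names; the actual implementation is in the repository linked above):

<source lang="cpp">
// Illustrative sketch of the array-processing program described above.
#include <cstdlib>
#include <ctime>
#include <vector>

// Apply the same function, std::rand()/100, to every element of the 2-D array.
void generateRandom(std::vector<std::vector<int>>& arr) {
    for (std::size_t i = 0; i < arr.size(); i++)
        for (std::size_t j = 0; j < arr[i].size(); j++)
            arr[i][j] = std::rand() / 100;
}

int main(int argc, char* argv[]) {
    if (argc != 2) return 1;
    int n = std::atoi(argv[1]); // array dimension supplied by the user
    std::srand(static_cast<unsigned>(std::time(nullptr)));
    std::vector<std::vector<int>> arr(n, std::vector<int>(n));
    generateRandom(arr);
    return 0;
}
</source>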

After profiling the program with '''gprof''', the following call graph is generated:

[[File:Array_Processing_-_Call_Graph.jpeg]]

From the call graph it is evident that the generateRandom() function is the hotspot, accounting for 100% of the execution time. The function consists of two for loops, one nested inside the other, giving it a complexity of O(n^2).

The computations involved with each element of the array are independent of the rest of the elements, so this function is a deserving candidate for parallelization. Additionally, the array elements can be evenly distributed into sub-arrays and a process can be assigned to each sub-array, as in the sketch below.
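
One possible parallel form is sketched below, under the assumption that the values are generated on a CUDA device; since std::rand() cannot be called from device code, cuRAND is substituted, and all names are illustrative rather than the assignment's actual code:

<source lang="cpp">
#include <cuda_runtime.h>
#include <curand_kernel.h>

// One thread per element: each thread seeds its own RNG state and writes
// one value, mirroring the per-element function of the serial loops.
__global__ void generateRandomKernel(int* arr, int n, unsigned long long seed) {
    int i = blockIdx.y * blockDim.y + threadIdx.y; // row
    int j = blockIdx.x * blockDim.x + threadIdx.x; // column
    if (i < n && j < n) {
        curandState state;
        curand_init(seed, i * n + j, 0, &state);
        arr[i * n + j] = static_cast<int>(curand(&state) / 100u);
    }
}

int main() {
    const int n = 1024;
    int* d_arr;
    cudaMalloc(&d_arr, n * n * sizeof(int));

    // 16x16 threads per block; enough blocks to cover the whole n x n array.
    dim3 block(16, 16);
    dim3 grid((n + block.x - 1) / block.x, (n + block.y - 1) / block.y);
    generateRandomKernel<<<grid, block>>>(d_arr, n, 1234ULL);
    cudaDeviceSynchronize();

    cudaFree(d_arr);
    return 0;
}
</source>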
  
 
=== Assignment 2 ===

For Assignment 2, we decided to parallelize the application selected by Bruno. In that code, the function that took up a significant amount of time was calculateDimensions(); the flat profile indicates that it accounts for 97.67% of the execution time.

Link to code on GitHub: https://github.com/brucremo/DPS915

==== Identifying Parallelizable Code ====

calculateDimensions() has 3 nested for loops, each of which sets the value of one of the triangle sides. The innermost for loop squares the two shorter sides and adds the results together; a condition then checks whether the sum of the squared side values equals the squared value of the hypotenuse. The results are printed when the condition is true.

[[File:NestedLoops.PNG]]

The nested for loops represent the serial way of calculating the dimensions.

==== Offloading Process ====

To parallelize the code mentioned above, we did the following (a condensed code sketch of all five steps appears after step 5):

1. Used the CUDA device properties to design the grid and blocks.

[[File:cudaDevProps.PNG]]

2. Adjusted the number of threads to be used in the grid depending on the value passed in by the user (the maximum hypotenuse value).

[[File:adjustNTPG.PNG]]

3. Allocated 2 arrays on the device, initialized from 1 to the maximum hypotenuse (given by the user as an argument):

* 1 array represents the hypotenuse side
* 1 array represents one of the sides of the triangle
* This was done using CUDA's Thrust library.

[[File:allocInit.PNG]]

4. Calculated the number of blocks required and iterated through the Thrust vector, passing each individual element to the kernel launch along with the two previously allocated arrays.

[[File:NBandLaunch.PNG]]

5. The kernel contains the instructions for verifying whether the value passed in is part of a Pythagorean triple. If a triple is found, the values are printed out.

[[File:kernel.PNG]]
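
Putting the five steps together, a condensed sketch of the offload could look like the following (identifier names and minor details are assumptions for illustration; the actual code is in the repository):

<source lang="cpp">
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>
#include <thrust/device_ptr.h>
#include <thrust/device_vector.h>
#include <thrust/sequence.h>

// Step 5: each thread fixes one leg (a) for the hypotenuse value (c) passed
// in, then scans the other leg (b) for a match.
__global__ void findTriples(const int* sides, int n, int c) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= n) return;
    int a = sides[idx];
    for (int b = a + 1; b < c; b++)
        if (a * a + b * b == c * c)
            printf("%d %d %d\n", a, b, c);
}

int main(int argc, char* argv[]) {
    if (argc != 2) return 1;
    int n = std::atoi(argv[1]); // maximum hypotenuse value

    // Step 1: query the device properties to size the grid.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    // Step 2: cap the threads per block at what the device supports (or at n).
    int ntpb = prop.maxThreadsPerBlock < n ? prop.maxThreadsPerBlock : n;

    // Step 3: allocate the two device arrays with Thrust, initialized 1..n.
    thrust::device_vector<int> sides(n), hyps(n);
    thrust::sequence(sides.begin(), sides.end(), 1);
    thrust::sequence(hyps.begin(), hyps.end(), 1);

    // Step 4: compute the number of blocks and launch the kernel once per
    // element of the hypotenuse vector.
    int nblks = (n + ntpb - 1) / ntpb;
    for (int i = 0; i < n; i++) {
        int c = hyps[i]; // copies one element back from the device
        findTriples<<<nblks, ntpb>>>(
            thrust::raw_pointer_cast(sides.data()), n, c);
    }
    cudaDeviceSynchronize();
    return 0;
}
</source>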

==== Time Logging ====

To compare the timings of the serial version and the parallel version, we modified the original file to contain two functions: calculateCUDA() and calculateSerial(). The execution of both functions was timed to see which one was quicker.

[[File:Timings.PNG]]

calculateSerial() contains the initial version of the application: the 3 nested for loops and the serial approach to finding the Pythagorean triples. The time taken to find the triples is printed out after execution.

calculateCUDA() contains the parallelized version of the application. It sets the properties of the grid and its blocks, and launches a kernel to find the Pythagorean triples. The time taken to find the triples is printed out after execution.
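
A minimal sketch of such a timing harness, assuming std::chrono (the stubs below stand in for the real functions; when timing the CUDA path, the kernel must finish, e.g. via cudaDeviceSynchronize(), before the clock is stopped):

<source lang="cpp">
#include <chrono>
#include <iostream>

// Generic helper: time a call and report milliseconds.
template <typename F>
long long timeMs(F f) {
    auto t0 = std::chrono::steady_clock::now();
    f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
}

// Stubs standing in for the two functions described above.
void calculateSerial(int n) { (void)n; /* 3 nested loops over side values */ }
void calculateCUDA(int n)   { (void)n; /* kernel launches + synchronize */ }

int main() {
    int n = 1500;
    std::cout << "serial: " << timeMs([&] { calculateSerial(n); }) << " ms\n";
    std::cout << "CUDA:   " << timeMs([&] { calculateCUDA(n); }) << " ms\n";
    return 0;
}
</source>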

==== Results ====

Below is a graph that shows the execution time of both the serial approach and the parallel approach.

[[File:TimeComparison.PNG]]

=== Assignment 3 ===

==== Using Shared Memory ====

To optimize our code, we used shared memory inside the kernel. For our purposes, allocating arrays in the kernel with shared memory required a constant value for the number of threads per block, which therefore could not be calculated at run time. Instead, we set the number of threads per block to 1024 and declared it as a constant at the beginning of the application. This allowed us to use shared memory inside the kernel and optimize our application.
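
A minimal sketch of why the compile-time constant matters for a statically declared shared-memory buffer (illustrative names, not our exact kernel):

<source lang="cpp">
#include <cstdio>
#include <cuda_runtime.h>

const int ntpb = 1024; // threads per block, fixed at compile time

__global__ void findTriplesShared(const int* sides, int n, int c) {
    // A statically sized shared-memory array: its dimension must be a
    // compile-time constant, which is why ntpb cannot be computed at run time.
    __shared__ int s[ntpb];
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n)
        s[threadIdx.x] = sides[idx]; // stage the leg values once per block
    __syncthreads();
    if (idx >= n) return;
    int a = s[threadIdx.x];          // later reads hit fast shared memory
    for (int b = a + 1; b < c; b++)
        if (a * a + b * b == c * c)
            printf("%d %d %d\n", a, b, c);
}
</source>

A run-time block size would instead require the extern __shared__ declaration form together with a third kernel-launch configuration argument, which fixing the constant lets us avoid.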

Below is a picture of the optimized kernel:

[[File:QuickerKernel.PNG]]

==== Results ====

On average, the run time for all 7 different problem sizes was reduced by approximately 216 milliseconds.

Below is a graph that illustrates the time saved on all 7 different problem sizes with the optimized kernel:

[[File:OptimizationGraph.PNG]]

==== Other Optimizations ====

Another optimization would be to improve the original algorithm itself. The pow() function is very expensive because it is implemented in software. A better alternative is to use hardware instructions: in this case it is quicker to multiply each value by itself with the (*) operator than to raise it to the power of 2.
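
A small illustration of the change (hypothetical helper names; for side lengths in this range both predicates give the same result):

<source lang="cpp">
#include <cmath>

// Library call: std::pow is implemented in software and returns a double.
bool isTripleSlow(int a, int b, int c) {
    return std::pow(a, 2) + std::pow(b, 2) == std::pow(c, 2);
}

// Hardware multiplies: the same predicate using the (*) operator.
bool isTripleFast(int a, int b, int c) {
    return a * a + b * b == c * c;
}
</source>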

[[File:OptimizedAlgorithm.PNG]]

After modifying the code to reflect this change, the GPU and CPU methods take roughly the same execution time when the value of n is 800.
