==== Running Display Test Single Rotation ====
We can un-comment the "test" section in Rotate.cpp to read a .jpg, verify that the stored colour channel values are correct, and make sure the rotation is working as expected. Here is Tiny-Shay.jpg, a 30px x 21px top-down image of my cat lying on the floor. Mousing over a pixel displays the X and Y coordinates along with the corresponding red, green, and blue values.
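The project's actual test code lives in Rotate.cpp; purely as a minimal stand-alone illustration of the CImg calls involved (load, read one pixel per channel, open the interactive display), a sketch might look like this:

<nowiki>
#include <iostream>
#include "CImg.h"

int main() {
    // Load the JPEG into a CImg object (one value per colour channel per pixel).
    cimg_library::CImg<unsigned char> img("Tiny-Shay.jpg");

    // Print one pixel's RGB values to compare against the mouse-over readout.
    int x = 0, y = 0;
    std::cout << "(" << x << "," << y << ") "
              << "R=" << (int)img(x, y, 0, 0) << " "
              << "G=" << (int)img(x, y, 0, 1) << " "
              << "B=" << (int)img(x, y, 0, 2) << std::endl;

    // Open the interactive display window; mousing over pixels shows coordinates and values.
    img.display("Tiny-Shay.jpg");
    return 0;
}
</nowiki>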
==== Dependencies ====
Figuring out how to use the CImg library in a parallel solution was fairly straightforward. In order to do so, I had to isolate any reference to CImg to its own .cpp file; trying to include the CImg library in the CUDA .cu file caused compilation errors. We use the function getImage, declared in image.h and available to Rotate.cu, to retrieve the image data as a one-dimensional float array. We can do the opposite and pass the float array back to image.cpp for it to construct a CImg object and display the image to the screen (or utilize any other CImg functionality).
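As a minimal sketch of that separation (file and function names taken from the listings below; the input file name here is hypothetical), the .cu side only ever sees the plain-array interface from image.h:

<nowiki>
// Rotate.cu -- sketch only: the CUDA file includes image.h, never CImg.h
#include "image.h"

int main() {
    char name[] = "Shay.jpg";                // hypothetical input file
    int w, h;
    PX_TYPE* pixels = getImage(name, w, h);  // image.cpp handles all CImg/JPEG work
    // ... copy to device, rotate, copy back ...
    display(pixels, w, h);                   // hand the array back to image.cpp to show it
    delete[] pixels;
    return 0;
}
</nowiki>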
Getting libjpeg to work (the functionality of reading RGB pixel values from the .jpg file and storing them in a CImg object) took much longer to figure out. Linking the previous Windows .lib build did not work, I suspect because our parallel version is compiled as 64-bit while that libjpeg build is 32-bit. My first attempt (which did not work, so I would not recommend trying it) was to replace libjpeg with [https://libjpeg-turbo.org/ libjpeg-turbo], a 64-bit library that provides every libjpeg function, so it should be able to stand in for libjpeg as-is, and it is supposed to run faster due to optimization. By installing libjpeg-turbo and moving jpeg62.dll to the project executable folder, I was able to get the solution to compile; however, it froze at run time upon opening a .jpg file.
What finally did work was installing the Windows 64-bit version of [https://www.imagemagick.org/script/download.php#windows ImageMagick], and then removing the line of code "#define cimg_use_jpeg" which told CImg to use libjpeg. By default, CImg finds ImageMagick in its default installation directory and uses its functionality instead when initializing a CImg object from a file. Oddly enough, I tried to use ImageMagick at the very beginning of the project and could not get it to work, which is why I used libjpeg instead; for the CUDA version, it works. Either way, you will notice the pixel values themselves are slightly different from those in the first time run example. This simply shows that libjpeg and ImageMagick use different logic to determine colour values.

==== Initial CUDA Code ====
This code will read the .jpg filename given as a command line argument into a CImg object, copy the float array to the device, use the device to rotate the image by 90 degrees clockwise one time, then copy the result back to the host. It is just to verify everything is working as expected. We will then change the code to rotate the same images the same number of times as before.

;image.h
<nowiki>
// Evan Marinzel - DPS915 Project
// image.h

#pragma once

#define PX_TYPE float

PX_TYPE* getImage(char* filename, int &w, int &h);
void display(const PX_TYPE* img, int w, int h);
</nowiki>

;image.cpp
<nowiki>
// Evan Marinzel - DPS915 Project
// image.cpp

#include <stdio.h>
#include <iostream>
#include <iomanip>
#include "image.h"
#include "CImg.h"

// Indexing function for CImg object.
// CImg[x][y][z]
inline int idx(int x, int y, int w, int h, int z) {
    return x + y * w + w * h * z;
}

// Prints colour channel values of img to console.
// Opens image, mouse-over pixels to verify indexing is correct.
// Uses 40 x 40 pixel sample from the top left corner if img is larger than 40 x 40.
void display(const PX_TYPE* img, int w, int h) {

    int height = h > 40 ? 40 : h;
    int width = w > 40 ? 40 : w;
    int size = w * h * 3;

    for (int i = 0; i < 3; i++) {
        if (i == 0)
            std::cout << "Red:" << std::endl;
        else if (i == 1)
            std::cout << "Green:" << std::endl;
        else if (i == 2)
            std::cout << "Blue:" << std::endl;
        for (int j = 0; j < height; j++) {
            for (int k = 0; k < width; k++) {
                std::cout << std::setw(4) << (int)img[idx(k, j, w, h, i)];
            }
            std::cout << std::endl;
        }
        std::cout << std::endl;
    }

    cimg_library::CImg<PX_TYPE> cimg(w, h, 1, 3, 0);
    for (int i = 0; i < size; i++) {
        cimg[i] = img[i];
    }
    cimg_library::CImg<PX_TYPE> imgCropped(cimg);
    imgCropped.crop(0, 0, width - 1, height - 1, 0);
    imgCropped.display();
}

PX_TYPE* getImage(char* filename, int &w, int &h) {

    std::cout << "Trying to read " << filename << std::endl;
    cimg_library::CImg<PX_TYPE> cimg(filename);
    std::cout << "Done reading " << filename << std::endl;
    w = cimg.width();
    h = cimg.height();
    int size = w * h * cimg.spectrum();
    PX_TYPE* img = new PX_TYPE[size];
    for (int i = 0; i < size; i++) {
        img[i] = cimg[i];
    }
    return img;
}
</nowiki>

;rotate90.cu
<nowiki>
// Evan Marinzel - DPS915 Project
// Rotate.cu

#include <iostream>
#include <iomanip>
#include <cstring>
#include "image.h"
#include "cuda_runtime.h"
#include "device_launch_parameters.h"

__global__ void rot90(PX_TYPE* src, PX_TYPE* dst, int src_w, int src_h, int z) {
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (k < src_w && j < src_h)
        dst[(src_h - 1 - j) + k * src_h + src_w * src_h * z] = src[k + j * src_w + src_w * src_h * z];
}

int main(int argc, char** argv) {

    if (argc != 2) {
        std::cerr << argv[0] << ": invalid number of arguments\n";
        std::cerr << "Usage: " << argv[0] << " image.jpg\n";
        return 1;
    }

    // Retrieving cuda device properties
    int d;
    cudaDeviceProp prop;
    cudaGetDevice(&d);
    cudaGetDeviceProperties(&prop, d);
    unsigned ntpb = 32;

    // Host and device arrays of pixel values for original (src) and rotated (dst) image
    PX_TYPE* h_src = nullptr;
    PX_TYPE* h_dst = nullptr;
    PX_TYPE* d_src = nullptr;
    PX_TYPE* d_dst = nullptr;

    // Width and height of original image
    int w, h;

    // Allocate host memory for source array, initialize pixel value array from .jpg file, and retrieve width and height.
    std::cout << "Opening image ..." << std::endl;
    h_src = getImage(argv[1], w, h);
    std::cout << "Opening image complete." << std::endl;

    // Display 40x40px sample of h_src and print pixel values to console to verify .jpg loaded correctly
    std::cout << "Displaying h_src and printing color values to console ..." << std::endl;
    display(h_src, w, h);

    // Allocate host memory for rotated version
    h_dst = new PX_TYPE[w * h * 3];

    // Calculate block dimensions
    int nbx = (w + ntpb - 1) / ntpb;
    int nby = (h + ntpb - 1) / ntpb;

    // Define block and grid dimensions
    dim3 dGrid(nbx, nby, 1);
    dim3 dBlock(ntpb, ntpb, 1);

    // Print h_src dimensions and size to console
    std::cout << argv[1] << " Image Data" << std::endl;
    std::cout << std::setfill('=') << std::setw(strlen(argv[1]) + 11) << "=" << std::setfill(' ') << std::endl;
    std::cout << std::setw(17) << std::right << "Width: " << w << "px" << std::endl;
    std::cout << std::setw(17) << std::right << "Height: " << h << "px" << std::endl;
    std::cout << std::setw(17) << std::right << "Colour Channels: " << 3 << std::endl;
    std::cout << std::setw(17) << std::right << "Pixel Size: " << sizeof(PX_TYPE) << " bytes" << std::endl;
    std::cout << std::setw(17) << std::right << "Total Size: " << w * h * 3 * sizeof(PX_TYPE) << " bytes" << std::endl;
    std::cout << std::endl;

    // Print grid details and total number of threads
    std::cout << "Number of blocks (x): " << nbx << std::endl;
    std::cout << "Number of blocks (y): " << nby << std::endl;
    std::cout << "Number of threads per block (x): " << ntpb << std::endl;
    std::cout << "Number of threads per block (y): " << ntpb << std::endl;
    std::cout << "Operations required for one colour channel: " << w * h << std::endl;
    std::cout << "Total threads available: " << ntpb * ntpb * nby * nbx << std::endl;

    // Allocate device memory for src and dst
    std::cout << "Allocating device memory ..." << std::endl;
    cudaMalloc((void**)&d_src, w * h * sizeof(PX_TYPE) * 3);
    cudaMalloc((void**)&d_dst, w * h * sizeof(PX_TYPE) * 3);

    // Copy h_src to d_src
    std::cout << "Copying source image to device ..." << std::endl;
    cudaMemcpy(d_src, h_src, w * h * sizeof(PX_TYPE) * 3, cudaMemcpyHostToDevice);

    // Launch grid 3 times (one grid per colour channel)
    std::cout << "Performing rotation ..." << std::endl;
    for (int i = 0; i < 3; i++) {
        rot90 << <dGrid, dBlock >> > (d_src, d_dst, w, h, i);
    }

    // Ensure operations completed
    cudaDeviceSynchronize();

    // Copy d_dst to h_dst
    std::cout << "Copying rotated image to host ..." << std::endl;
    cudaMemcpy(h_dst, d_dst, w * h * sizeof(PX_TYPE) * 3, cudaMemcpyDeviceToHost);

    // Display 40x40px sample of h_dst and print pixel values to console to verify rotation worked
    std::cout << "Displaying h_dst and printing color values to console ..." << std::endl;
    display(h_dst, h, w);

    // Deallocate memory
    std::cout << "Deallocating device memory ..." << std::endl;
    cudaFree(d_src);
    cudaFree(d_dst);
    delete[] h_src;
    delete[] h_dst;

    return 0;
}
</nowiki>

==== Single Rotation ====
Here we can verify the parallel solution reads the initial pixel values and applies the rotation correctly:

[[File:Verify-3.png|800px]]

After rotation:

[[File:Verify-4.png|800px]]

==== The Rotation Operation ====
Grid dimensions and total number of threads are displayed before launching. A single colour channel of Tiny-Shay.jpg only requires about half of one 32 x 32 block:

[[File:Tiny-Shay-cuda.png]]

Large-Shay.jpg required a grid of 102 x 77 blocks, each block containing 32 x 32 threads, allowing for 8042496 threads per colour channel:

[[File:Large-Shay-cuda.png]]

It was my design choice, for reasons of being able to wrap my head around the logic, to launch 3 two-dimensional grids per image, one per colour channel. My initial thought was to launch a single grid and utilize the z member to mimic 3 dimensions; a rough sketch of that variant is shown below.
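As a point of reference, here is a minimal sketch (not part of the project code, and not benchmarked here) of what that single-launch variant could look like: one grid three blocks deep covers all colour channels, with blockIdx.z taking the place of the kernel parameter z. The kernel name rot90_3d is hypothetical.

<nowiki>
// Sketch only: one launch with gridDim.z == 3 instead of three separate launches.
__global__ void rot90_3d(PX_TYPE* src, PX_TYPE* dst, int src_w, int src_h) {
    int k = blockIdx.x * blockDim.x + threadIdx.x;   // source x
    int j = blockIdx.y * blockDim.y + threadIdx.y;   // source y
    int z = blockIdx.z;                              // colour channel (0 = R, 1 = G, 2 = B)
    if (k < src_w && j < src_h)
        dst[(src_h - 1 - j) + k * src_h + src_w * src_h * z] = src[k + j * src_w + src_w * src_h * z];
}

// Launched once per image:
// dim3 dGrid(nbx, nby, 3);
// dim3 dBlock(ntpb, ntpb, 1);
// rot90_3d << <dGrid, dBlock >> > (d_src, d_dst, w, h);
</nowiki>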
I should still try accomplishing this in a single grid within the working solution to compare the results. For now, we instead pass the current channel (z) to the kernel and use it when calculating the correct location in the one-dimensional representation of the image:

<nowiki>
// Launch grid 3 times (one grid per colour channel)
std::cout << "Performing rotation ..." << std::endl;
for (int i = 0; i < 3; i++) {
    rot90 << <dGrid, dBlock >> > (d_src, d_dst, w, h, i);
}
</nowiki>

==== Profiling With Nsight ====
I edit rotate90.cu, removing the display function calls and looping to rotate the given image 12 times, as done in the CPU version. I copy the result of the rotation back to the host after each rotation completes. I re-use the memory allocated on the device for each rotation, allocating the source and destination arrays only once and freeing the memory after all 12 rotations are complete:

<nowiki>
// Allocate device memory for src and dst
std::cout << "Allocating device memory ..." << std::endl;
cudaMalloc((void**)&d_src, w * h * sizeof(PX_TYPE) * 3);
cudaMalloc((void**)&d_dst, w * h * sizeof(PX_TYPE) * 3);

// Copy h_src to d_src
std::cout << "Copying source image to device ..." << std::endl;
cudaMemcpy(d_src, h_src, w * h * sizeof(PX_TYPE) * 3, cudaMemcpyHostToDevice);

// Rotate image 6 x 2 times, copying result back to host each time
for (int r = 0; r < 6; r++) {
    std::cout << "Rotating 2x ..." << std::endl;

    // Launch grid 3 times (one grid per colour channel)
    for (int i = 0; i < 3; i++) {
        rot90 << <dGrid, dBlock >> > (d_src, d_dst, w, h, i);
    }

    // Ensure operations completed
    cudaDeviceSynchronize();

    // Copy d_dst to h_dst
    std::cout << "Copying result to host ..." << std::endl;
    cudaMemcpy(h_dst, d_dst, w * h * sizeof(PX_TYPE) * 3, cudaMemcpyDeviceToHost);

    // Rotate again
    for (int i = 0; i < 3; i++) {
        rot90 << <dGrid, dBlock >> > (d_dst, d_src, h, w, i);
    }

    // Ensure operations completed
    cudaDeviceSynchronize();

    // Copy d_src to h_src
    std::cout << "Copying result to host ..." << std::endl;
    cudaMemcpy(h_src, d_src, w * h * sizeof(PX_TYPE) * 3, cudaMemcpyDeviceToHost);
}

// Deallocate memory
std::cout << "Deallocating memory ..." << std::endl;
cudaFree(d_src);
cudaFree(d_dst);
delete[] h_src;
delete[] h_dst;
</nowiki>

Here is the output from one run:

[[File:Cuda-profilerun.png]]

;Device Usage %
Tiny-Shay.jpg: 0.01%

Medium-Shay.jpg: 0.39%

Large-Shay.jpg: 0.93%

Huge-Shay.jpg: 1.26%

(36 kernel launches per run)

;Timeline Results
For each run, I list the 4 operations that took the most time. For the tiny image, allocating the source and destination arrays on the device took the longest, though still well under half a second, and it took about the same amount of time in every case. Initializing the CImg variable from the .jpg file quickly became the biggest issue; this operation is CPU bound and depends on the logic of ImageMagick. Copying the rotated image back to the host (cudaMemcpy) also starts to become a hot spot, with a noticeable increase between the large and huge images.

[[File:Summary-2.png]]

Comparing total run times of the CPU version to the CUDA version shows a clear winner as the .jpg files increase in size. Rotating Large-Shay.jpg (3264 x 2448) was '''3x''' faster, and Huge-Shay.jpg was '''4.95x''' faster. Tiny and Medium-Shay.jpg actually took longer using the CUDA version, but took less than half a second in both cases.

[[File:Summary-3.png]]

;Conclusion So Far
The initial CUDA code had decent results. The overall device utilization percentage seems fairly low.
This may be because the device can handle far more threads than even Huge-Shay.jpg requires, or we may be able to optimize the code to utilize more of the device. To get better results when initializing from the .jpg file, I would need to investigate efficiencies in ImageMagick or CImg, or explore other methods of reading the image file. Wait time for the grid to execute is very low in all cases (0.007 - 0.15 seconds). I should investigate the effects of different grid designs, explore shared memory, and try other methods of optimization.
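For follow-up measurements of kernel time specifically, one option (not what produced the numbers above, which come from the Nsight timeline) would be to bracket the launches with CUDA events; a minimal sketch, reusing the variable names from rotate90.cu:

<nowiki>
// Sketch only: timing the three per-channel launches with CUDA events.
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start);
for (int i = 0; i < 3; i++) {
    rot90 << <dGrid, dBlock >> > (d_src, d_dst, w, h, i);
}
cudaEventRecord(stop);
cudaEventSynchronize(stop);

float ms = 0.0f;
cudaEventElapsedTime(&ms, start, stop);
std::cout << "Rotation kernels took " << ms << " ms" << std::endl;

cudaEventDestroy(start);
cudaEventDestroy(stop);
</nowiki>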
=== Assignment 3 ===
==== "Register" Index ====
For my first attempt at optimization, I thought maybe, just maybe, the index calculations were being performed from within global memory:
 
<nowiki>
__global__ void rot90(PX_TYPE* src, PX_TYPE* dst, int src_w, int src_h, int z) {
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (k < src_w && j < src_h)
        dst[(src_h - 1 - j) + k * src_h + src_w * src_h * z] = src[k + j * src_w + src_w * src_h * z];
}</nowiki>
 
So I declared two "register" variables and computed the indexes up front:
 
<nowiki>
__global__ void rot90(PX_TYPE* src, PX_TYPE* dst, int src_w, int src_h, int z) {
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    int d = (src_h - 1 - j) + k * src_h + src_w * src_h * z;
    int s = k + j * src_w + src_w * src_h * z;
    if (k < src_w && j < src_h)
        dst[d] = src[s];
}</nowiki>
 
This only had a slightly negative effect, although such a small difference may simply be run-to-run variation. In hindsight, the index arithmetic operates on register values either way, so there was likely little for this change to improve:
 
[[File:Summary-4.png]]
 
==== Unsigned Char vs. Float ====
The first real improvement came from changing PX_TYPE from float back to unsigned char, as used in the serial version. Unsigned char is sufficient for .jpg colour values, which range from 0 to 255. GPUs are designed to perform operations on floating point numbers; however, we are not performing any calculations outside of the indexing, and kernel performance was the same for float and unsigned char. We copy the source image to the device once and back to the host 12 times, so the size of each pixel matters.
 
{| class="wikitable"
|+Size Comparison
|-
|
|Unsigned Char
|Float
|-
|Tiny_Shay.jpg
|1.93 KB
|7.73 KB
|-
|Medium_Shay.jpg
|5.71 MB
|22.8 MB
|-
|Large_Shay.jpg
|22.8 MB
|91.4 MB
|-
|Huge_Shay.jpg
|91.4 MB
|365 MB
|}
 
This saves almost a full second of transfer latency for the largest file, bringing cudaMemcpy down to about the same time as the kernel execution:
 
[[File:Summary-5.png]]
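For reference, the table values follow directly from width x height x 3 channels x sizeof(PX_TYPE); a quick check using the Large-Shay.jpg dimensions quoted earlier (3264 x 2448), purely as a stand-alone size calculation:

<nowiki>
#include <iostream>

int main() {
    long long w = 3264, h = 2448;    // Large-Shay.jpg dimensions
    long long bytes = w * h * 3;     // one byte per channel as unsigned char
    std::cout << "unsigned char: " << bytes / (1024.0 * 1024.0) << " MB" << std::endl;       // ~22.8 MB
    std::cout << "float:         " << bytes * 4 / (1024.0 * 1024.0) << " MB" << std::endl;   // ~91.4 MB
    return 0;
}
</nowiki>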
 
==== Shared Memory ====
I could not think of a way to utilize shared memory for this application. No calculations are being performed, and copying to shared memory would be an additional operation, since one write to global memory is required either way. Copying a small chunk of the source image to shared memory to improve read time would also break the current indexing logic.
 
==== Constant Memory ====
Utilizing constant memory for the source image was something I wanted to try. The largest unsigned char file of 91.4 MB seemed affordable, and we do not write to it.
 
Since the size of a __constant__ array must be a compile-time constant, I needed to define the size of the largest file and use that for all files:
 
<nowiki>#define SRC_MAX_SIZE 95883264
 
__constant__ PX_TYPE d_src[SRC_MAX_SIZE];</nowiki>
 
Then copy over only the actual number of bytes for the current image:
 
<nowiki>// Copy h_src to d_src
std::cout << "Copying source image to device ..." << std::endl;
cudaMemcpyToSymbol(d_src, h_src, w * h * sizeof(PX_TYPE) * 3);</nowiki>
 
Compiling gave an error: the image needs roughly 91 MB of constant data, but the device limit is 0x10000 bytes (64 KB):
 
<nowiki>CUDACOMPILE : ptxas error : File uses too much global constant data (0x5b71000 bytes, 0x10000 max)</nowiki>
 
The only example file small enough to fit and compile was Tiny_Shay.jpg, which there is no point in optimizing.
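The 0x10000-byte limit in the error corresponds to the device's total constant memory, which can be confirmed at run time from the device properties already queried in rotate90.cu (cudaDeviceProp::totalConstMem); a minimal sketch:

<nowiki>
// Sketch only: report how much constant memory the device actually offers.
int d;
cudaDeviceProp prop;
cudaGetDevice(&d);
cudaGetDeviceProperties(&prop, d);
std::cout << "Constant memory available: " << prop.totalConstMem << " bytes" << std::endl;   // typically 65536
</nowiki>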