Tuesday 14 October 2014

Matrix-vector multiplication using shared memory

Matrix-vector multiplications, as well as matrix-matrix ones, are essential for all sorts of numeric computations. GPGPUs allow massive parallelisation of such operations. This post is about performing matrix-vector multiplications in CUDA using shared memory: a type of on-chip memory that is much faster than the global memory of the device. In fact, shared memory can be up to 100 times faster than global memory, provided that there are no bank conflicts between the threads. Algebraic operations such as matrix-vector multiplications can benefit a lot from the use of shared memory, as we will show in this post.


The fine print:  Note that the code presented here is intended to demonstrate the use of shared memory in CUDA and is, under no circumstances, optimised for high throughput and efficiency!
 

The code - A first approach using statically allocated shared memory

Proper use of shared memory can lead to considerable acceleration of your application. Shared memory is an on-chip memory that is shared among the threads of a block. As mentioned in the CUDA C Programming Guide, Section 3.2.3, "Any opportunity to replace global memory accesses by shared memory accesses should therefore be exploited." In this post we illustrate how to allocate and use shared memory through a simple example, matrix-vector multiplication, and we discuss static and dynamic allocation of shared memory.

In this post we assume that the values of our matrix are stored column-by-column, i.e., in column-major order. We will first assume that the dimensions of our matrix are multiples of the block size we use, and then we will see how this assumption can be lifted. Let us first take a look at the kernel that multiplies a matrix dA with a vector dx:
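Here is a minimal sketch of such a kernel under the above assumptions. BLOCK_SIZE, NROWS and NCOLS are placeholder compile-time constants used for illustration, A is stored in column-major order, and each block of BLOCK_SIZE threads computes BLOCK_SIZE consecutive entries of y:

 #define BLOCK_SIZE 16    // placeholder block size
 #define NROWS 256        // placeholder number of rows (multiple of BLOCK_SIZE)
 #define NCOLS 256        // placeholder number of columns (multiple of BLOCK_SIZE)

 template<typename T>
 __global__ void matvec_kernel(const T * dA, const T * dx, T * dy)
 {
     const unsigned int bid = blockIdx.x;    // index of this block of rows
     const unsigned int row = threadIdx.x;   // row handled by this thread, within the block
     T *y_sub = &dy[bid * BLOCK_SIZE];       // the chunk of y computed by this block

     // Statically allocated shared memory: a tile of x and a
     // BLOCK_SIZE-by-BLOCK_SIZE tile of A, both kept in column-major order
     __shared__ T x_shared[BLOCK_SIZE];
     __shared__ T A_shared[BLOCK_SIZE * BLOCK_SIZE];

     T y_val = 0.0;

     // Sweep over the tiles that cover the rows assigned to this block
     for (unsigned int m = 0; m < NCOLS / BLOCK_SIZE; m++) {
         // Each thread loads one element of x and one column of the tile of A
         x_shared[row] = dx[m * BLOCK_SIZE + row];
         for (unsigned int e = 0; e < BLOCK_SIZE; e++) {
             A_shared[e + BLOCK_SIZE * row] =
                 dA[(bid * BLOCK_SIZE + e) + NROWS * (m * BLOCK_SIZE + row)];
         }
         __syncthreads();

         // Partial dot product of this thread's row with the current tile of x
         for (unsigned int e = 0; e < BLOCK_SIZE; e++)
             y_val += A_shared[row + BLOCK_SIZE * e] * x_shared[e];
         __syncthreads();
     }

     if (row < BLOCK_SIZE)   // trivially true here; see the inexact-tiling version below
         y_sub[row] = y_val;
 }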

The procedure is illustrated in the figure below for the case where exact tiling is assumed, i.e., the row and column sizes of the matrix are multiples of the block size. This algorithm is based on the example presented in Section 3.2.3 of the CUDA C Programming Guide with a few necessary modifications.


The data are all contained in rand_data.h as statically allocated __device__ variables (a detail that is not too important here).

In this example we have loaded the values of matrix A into shared memory, which is not necessary; we did so just to illustrate how this can be achieved. In fact, the values of A are used only once, so staging them through shared memory is redundant. The kernel above can be launched as follows:
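The following is a minimal launch sketch, where real_t is assumed to be a typedef for float or double, and dev_ptr_A, dev_ptr_x and address_y are assumed to be device pointers to A, x and y respectively:

 typedef float real_t;                // assumed; double works equally well

 dim3 dim_grid(NROWS / BLOCK_SIZE);   // one block per BLOCK_SIZE rows of A
 dim3 dim_block(BLOCK_SIZE);          // one thread per row within the block

 matvec_kernel<real_t><<<dim_grid, dim_block>>>(dev_ptr_A, dev_ptr_x, address_y);
 cudaDeviceSynchronize();             // wait for the kernel to complete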


Externally allocated shared memory

In the previous example we used statically allocated shared memory, assuming that the sizes of A_shared and x_shared are known at compile time. This has a few clear disadvantages, especially if we need to run the same code with different block sizes. For this reason we need to replace the static declarations of x_shared and A_shared with dynamic allocation as follows:
 extern __shared__ T smem[]; // Externally allocated shared memory  
 T *x_shared = (T *)smem;  
 T *A_shared = (T *)&smem[block_size];  

and then we launch the kernel, passing the number of bytes to allocate for smem as the third execution configuration parameter:
 unsigned int shared_mem_bytes =   
   BLOCK_SIZE * (BLOCK_SIZE + 1) * sizeof(real_t);  
 matvec_kernel<real_t><<<dim_grid, dim_block, shared_mem_bytes>>>(  
   dev_ptr_A, dev_ptr_x, address_y);  


Inexact tiling

The assumption of exact tiling is convenient, but too strong. Having understood the algorithm presented in the previous sections, we can now describe the method of inexact tiling, where we use square tiles of size BLOCK_SIZE-by-BLOCK_SIZE to cover the whole matrix A, regardless of its dimensions. The concept is illustrated in the following figure:


To do so, we need to decide how many horizontal and vertical blocks are needed to cover the matrix. Integer division in C does the trick! Let nCols and nRows be the number of columns and rows of A, respectively. Then, the number of horizontal blocks is (nCols + BLOCK_SIZE - 1)/BLOCK_SIZE and the number of vertical blocks is (nRows + BLOCK_SIZE - 1)/BLOCK_SIZE. Additionally, one needs to take into account that certain blocks don't lie fully inside matrix A; therefore, some of their threads need to remain idle (practically, they do absolutely nothing). For this purpose, we make use of guards, which are simply if statements. For instance:
 if (row < BLOCK_SIZE)
     y_sub[row] = y_val;

becomes:
 if (row < block_size && row + bid * block_size < nRows_)
     y_sub[row] = y_val;

What is more, we now need to change the API of the function so as to pass it the dimensions of the matrix (as device variables, of course):
 template<typename T>  
 __global__ void matvec_kernel(  
           const T * dA,  
           const T * dx,  
           T * dy,  
           const unsigned int * nRows,  
           const unsigned int * nx)  

The new kernel runs as follows:
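A minimal sketch of such a kernel is given below, under the assumptions already stated: A is stored in column-major order, x_shared and A_shared are carved out of the dynamically allocated smem as shown earlier, and every access that might fall outside A, x or y is guarded.

 template<typename T>
 __global__ void matvec_kernel(
           const T * dA,
           const T * dx,
           T * dy,
           const unsigned int * nRows,
           const unsigned int * nx)
 {
     const unsigned int bid = blockIdx.x;
     const unsigned int row = threadIdx.x;
     const unsigned int block_size = blockDim.x;
     const unsigned int nRows_ = *nRows;   // dimensions are read from device memory
     const unsigned int nx_ = *nx;
     T *y_sub = &dy[bid * block_size];     // the chunk of y computed by this block

     // Externally allocated shared memory: a tile of x first, then a tile of A
     extern __shared__ T smem[];
     T *x_shared = (T *)smem;
     T *A_shared = (T *)&smem[block_size];

     T y_val = 0.0;

     // Enough tiles to cover all nx_ columns, even if nx_ is not a multiple of block_size
     for (unsigned int m = 0; m < (nx_ + block_size - 1) / block_size; m++) {
         // Guarded load of a tile of x; out-of-range threads load a zero
         x_shared[row] = (m * block_size + row < nx_) ? dx[m * block_size + row] : (T)0;

         // Guarded load of one column of the current tile of A (column-major)
         for (unsigned int e = 0; e < block_size; e++) {
             const unsigned int i = bid * block_size + e;   // global row index
             const unsigned int j = m * block_size + row;   // global column index
             A_shared[e + block_size * row] = (i < nRows_ && j < nx_) ? dA[i + nRows_ * j] : (T)0;
         }
         __syncthreads();

         // Partial dot product for this thread's row
         for (unsigned int e = 0; e < block_size; e++)
             y_val += A_shared[row + block_size * e] * x_shared[e];
         __syncthreads();
     }

     // Guarded write: only rows that lie inside y are written
     if (row < block_size && row + bid * block_size < nRows_)
         y_sub[row] = y_val;
 }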
This kernel can be tested with a main function along the lines of the sketch below. Note that the dimension of the grid is now (host_nRows + BLOCK_SIZE - 1)/BLOCK_SIZE, as explained above, and that we are using externally allocated shared memory.
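In the sketch that follows, the dimensions 70-by-50 (deliberately not multiples of BLOCK_SIZE) and the all-ones test data are assumptions made purely for illustration; real_t, BLOCK_SIZE and matvec_kernel are as defined above.

 #include <cuda_runtime.h>
 #include <cstdio>
 #include <vector>

 int main(void)
 {
     const unsigned int host_nRows = 70;   // deliberately not a multiple of BLOCK_SIZE
     const unsigned int host_nx = 50;      // likewise

     // Host-side data: A (column-major) and x filled with ones, so y = A*x = host_nx * ones
     std::vector<real_t> A(host_nRows * host_nx, 1.0), x(host_nx, 1.0), y(host_nRows);

     real_t *dev_ptr_A, *dev_ptr_x, *address_y;
     unsigned int *dev_nRows, *dev_nx;
     cudaMalloc((void **)&dev_ptr_A, host_nRows * host_nx * sizeof(real_t));
     cudaMalloc((void **)&dev_ptr_x, host_nx * sizeof(real_t));
     cudaMalloc((void **)&address_y, host_nRows * sizeof(real_t));
     cudaMalloc((void **)&dev_nRows, sizeof(unsigned int));
     cudaMalloc((void **)&dev_nx, sizeof(unsigned int));

     cudaMemcpy(dev_ptr_A, A.data(), host_nRows * host_nx * sizeof(real_t), cudaMemcpyHostToDevice);
     cudaMemcpy(dev_ptr_x, x.data(), host_nx * sizeof(real_t), cudaMemcpyHostToDevice);
     cudaMemcpy(dev_nRows, &host_nRows, sizeof(unsigned int), cudaMemcpyHostToDevice);
     cudaMemcpy(dev_nx, &host_nx, sizeof(unsigned int), cudaMemcpyHostToDevice);

     // Enough blocks to cover all rows of A (inexact tiling)
     dim3 dim_grid((host_nRows + BLOCK_SIZE - 1) / BLOCK_SIZE);
     dim3 dim_block(BLOCK_SIZE);

     // Shared memory: BLOCK_SIZE entries for x plus BLOCK_SIZE^2 entries for a tile of A
     unsigned int shared_mem_bytes = BLOCK_SIZE * (BLOCK_SIZE + 1) * sizeof(real_t);

     matvec_kernel<real_t><<<dim_grid, dim_block, shared_mem_bytes>>>(
         dev_ptr_A, dev_ptr_x, address_y, dev_nRows, dev_nx);
     cudaDeviceSynchronize();

     cudaMemcpy(y.data(), address_y, host_nRows * sizeof(real_t), cudaMemcpyDeviceToHost);
     printf("y[0] = %g (expected %g)\n", (double)y[0], (double)host_nx);

     cudaFree(dev_ptr_A); cudaFree(dev_ptr_x); cudaFree(address_y);
     cudaFree(dev_nRows); cudaFree(dev_nx);
     return 0;
 }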


As I mentioned in the beginning, this code is not optimised. A lot of things can be done to improve its performance. I'll get back to you next week with a better version of the above implementation.
