Tips for more robust GPU programs using C++ and OpenCL

When I started programming for the GPU, I struggled to keep my GPU programs robust. This was partly because I was very new to C++ and C. OpenCL is very low level, and in combination with my inexperience this led to a lot of not-so-robust prototypes. While my code worked for prototyping, it was cumbersome to write and not very reusable.

In this post I will share some of the solutions I found to these issues. They deal with allocating and releasing memory on the GPU, as well as converting relatively cleanly between an STL vector and the GPU memory representation.

Simplifying allocation and release of memory

Basic memory allocation on the GPU using OpenCL is done with the code below, where memory is a handle to the memory on the GPU.

cl_mem memory = clCreateBuffer(context, CL_MEM_READ_WRITE,
                               numberOfBytes, NULL, NULL);

To release the allocated memory, the function below has to be called with the memory object as its argument.

clReleaseMemObject(memory);

This leaves several ways to leak GPU memory, especially if your code throws exceptions or otherwise "loses" the reference to the memory object before you release it explicitly.
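To illustrate one such leak (doSomethingThatMightThrow is a hypothetical placeholder):

cl_mem memory = clCreateBuffer(context, CL_MEM_READ_WRITE,
                               numberOfBytes, NULL, NULL);
doSomethingThatMightThrow();  // hypothetical: if this throws...
clReleaseMemObject(memory);   // ...this line is never reached and the buffer leaks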

A way to mitigate this in C++ is to create an object that allocates the memory on construction and releases it when its destructor is called. This ensures that when the object goes out of scope or is deleted, the memory on the GPU is also released. This is basically RAII (Resource Acquisition Is Initialization).

A very basic object for GPU memory (leaving out copy constructors etc.) would look like this.

class GPUMemory {
private:
    cl_mem memory;
public:
    GPUMemory(size_t bytes, const cl_context &ctx) {
        memory = clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes,
                                NULL, NULL);
    }
    ~GPUMemory() {
        // release when out of scope or deleted
        clReleaseMemObject(memory);
    }

    /*
     * Copy constructor, copy assignment operator etc. go here;
     * the implementation should be according to your needs
     */
};

// Possible usage
void doAwesomeCalculation(size_t size, cl_context &context) {
    GPUMemory map(size, context);
    // Do calculation, throw exceptions, etc.
} // Out of scope, so GPU memory is released

This means that you do not have to explicitly release memory for every weird way your application could behave, which avoids a lot of code repetition and leak opportunities. As long as the memory is allocated in the correct scope, you mostly do not have to think more about it. It should be noted that you should follow the rule of three and implement the copy constructor and assignment operator as well. The implementation may vary depending on how you want your object to behave. I'll leave that for another blog post.
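In the meantime, here is one possible sketch of those members. It is only one policy of several: copies share the same buffer using OpenCL's built-in reference counting (clRetainMemObject / clReleaseMemObject); a deep copy or a non-copyable design would be equally valid. These members go inside the GPUMemory class above.

GPUMemory(const GPUMemory &other) : memory(other.memory) {
    clRetainMemObject(memory);   // bump the buffer's reference count
}

GPUMemory& operator=(const GPUMemory &other) {
    clRetainMemObject(other.memory);  // retain first, so self-assignment is safe
    clReleaseMemObject(memory);       // drop our old buffer
    memory = other.memory;
    return *this;
}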

From vector to GPU memory

A vector is a very useful data structure when you want something sent to the GPU and back: since everything is stored contiguously in memory, no gathering or scattering on the CPU side is needed. To convert a vector to the kind of memory object I designed above, this is the code I use:

template <typename T>
GPUMemory<T> createAndFillMemory(const vector<T> &data,
                                 cl_context context,
                                 cl_command_queue queue) {
    cl_int err;
    if(data.size() > 0) {
        GPUMemory<T> mem = GPUMemory<T>(data.size(), context);
        err = clEnqueueWriteBuffer(queue, mem.getMemory(), CL_TRUE, 0,
                                   mem.getBytes(), &data[0],
                                   0, NULL, NULL);

        if(err != CL_SUCCESS) {
            throw runtime_error("Loading buffer failed");
        }
        return mem;
    }
    throw runtime_error("No content in vector");
}

As you can see, the GPUMemory object has been extended to use templates, since what I want in this case is a generic container for some struct. This saves a lot of code: without a method like this, you would need to write the conversion manually for each struct you plan to put on the GPU. It also ensures that you do not make offset or type errors when converting to and from the different representations.

The GPUMemory class has also been extended to keep track of the number of elements added, as well as the type size.
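The extended class is not shown in full here, but a minimal sketch of what it might look like, with the accessors the later snippets rely on (getMemory, getMemoryRef, getElements, getBytes), could be:

template <typename T> class GPUMemory {
private:
    cl_mem memory;
    size_t elements;
public:
    GPUMemory(size_t e, const cl_context &ctx) : elements(e) {
        memory = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                e * sizeof(T), NULL, NULL);
    }
    ~GPUMemory() {
        clReleaseMemObject(memory);
    }

    cl_mem getMemory() const { return memory; }              // handle for enqueue calls
    const cl_mem* getMemoryRef() const { return &memory; }   // pointer for clSetKernelArg
    size_t getElements() const { return elements; }          // number of T elements
    size_t getBytes() const { return elements * sizeof(T); } // total buffer size

    // plus the copy constructor and assignment operator from above
};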

Conversion back from GPUMemory object to a vector is done using this function:

template <typename T>
vector<T> readBackMemory(GPUMemory<T> &data,
                         cl_context context,
                         cl_command_queue queue) {
    vector<T> vec(data.getElements(), T());
    cl_int err;
    if(data.getElements() > 0) {
        err = clEnqueueReadBuffer(queue, data.getMemory(), CL_TRUE, 0,
                                  data.getBytes(), &vec[0],
                                  0, NULL, NULL);
        if(err != CL_SUCCESS) {
            checkError(err);
            throw runtime_error("Content not read back from GPU");
        }
        return vec;
    }
    throw runtime_error("No content in GPU memory");
}

General conversion from a vector containing any object to GPU memory is not something I would recommend. While it is possible using the code above, conversion with these methods should be restricted to primitives and structs of fixed size. You should also avoid pointers, since a pointer into CPU memory is meaningless on the GPU.
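As a hypothetical illustration of the distinction:

// Fine to transfer: fixed size, plain data fields only
struct Reading {
    cl_float3 position;
    cl_float value;
};

// Problematic: samples points into CPU memory, and the data
// it points to is never copied to the GPU
struct BadReading {
    cl_float *samples;
    cl_int count;
};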

There is still room for mistakes, as the following example shows. When you use memory on the GPU, no metadata carries over from the C++ code, so on the GPU side you are responsible for interpreting the transferred memory correctly. The methods above only ensure that copying memory back and forth from the GPU results in correctly typed and sized arrays.

An example

A simple example is to copy some particles to the GPU, apply some calculation, and then copy the particles back. On the CPU side I define a particle struct:

struct GPUParticle{
    cl_float3 pos;
    cl_int identifier;
    cl_float3 direction;
};

On the GPU (OpenCL kernel) side, a simple kernel to be applied could look like this:

struct GPUParticle{
    float3 pos;
    int identifier;
    float3 direction;
};

__kernel void copyParticles(
                       global struct GPUParticle * data,
                       global struct GPUParticle * dataOut
                       )
{
    int x = get_global_id(0);
    //do calculation here
    dataOut[x] = data[x];
}

It is important to keep the structs on both sides identical in layout and byte size, so that there are no offset issues (you should run tests for this). This means that data is not type safe when sent to and retrieved from the GPU, but I still think that partial type safety is much better than none at all, since it leaves far fewer avenues for such errors to occur.
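One way to write such a test, as a rough sketch, is to compare sizeof on both sides. A tiny kernel can report the device-side size (sizeCheck is a hypothetical kernel name):

// Device side: report the struct size as the kernel sees it
__kernel void sizeCheck(global int *out) {
    out[0] = sizeof(struct GPUParticle);
}

On the host, read that single int back and compare it against sizeof(GPUParticle); if they differ, the two compilers padded the fields differently and every element after the first will be read at the wrong offset.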

Finally, to apply the kernel to the data, I create this method:

vector<GPUParticle> Stuff::testCopy(vector<GPUParticle> &data) {
    GPUMemory<GPUParticle> mem = utils.createAndFillMemory(data, context,
                                                           cmd_queue[0]);

    GPUMemory<GPUParticle> out = GPUMemory<GPUParticle>(mem.getElements(),
                                                        context);

    size_t work_size[1];
    work_size[0] = mem.getElements();

    cl_int err;
    err  = clSetKernelArg(copyParticles,  0,
                          sizeof(cl_mem), mem.getMemoryRef());
    err |= clSetKernelArg(copyParticles,  1,
                          sizeof(cl_mem), out.getMemoryRef());

    if(err != CL_SUCCESS) {
        utils.checkError(err);
        throw runtime_error("Kernel setup failed");
    }

    // Apply kernel to data
    err = clEnqueueNDRangeKernel(cmd_queue[0], copyParticles,
                                 1, NULL, work_size, NULL, 0,
                                 NULL, NULL);
    if(err != CL_SUCCESS) {
        utils.checkError(err);
        throw runtime_error("Calculation failed");
    }
    return utils.readBackMemory(out, context, cmd_queue[0]);
}
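To tie it together, a minimal usage sketch (assuming stuff is a Stuff instance whose context, command queue and kernel are already set up):

vector<GPUParticle> particles(1024);
// ... fill in positions, identifiers and directions here ...
vector<GPUParticle> result = stuff.testCopy(particles);
// result holds the particles after the kernel has run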

There are of course problems where these techniques are not suitable or possible, but I use them a lot and they have made my GPU code much more robust. If you have suggestions for improvement or your own small snippets, please add a comment.

Some of my inspiration for this:
Bjarne Stroustrup's GoingNative 2012 keynote
