
gpucoder.atomicXor

Atomically perform bit-wise XOR between a specified value and a variable in global or shared memory

Since R2021b

    Description


    [A,oldA] = gpucoder.atomicXor(A,B) performs a bit-wise XOR between B and the value of A in global or shared memory and writes the result back into A. The operation is atomic in the sense that the entire read-modify-write operation is guaranteed to be performed without interference from other threads. The order of the input and output arguments must match the syntax provided.
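
    A minimal sketch of the call syntax, using hypothetical scalar values (the atomic guarantee matters when the call executes inside a generated GPU kernel):

    A = uint32(12);                       % 0b1100
    B = uint32(10);                       % 0b1010
    [A,oldA] = gpucoder.atomicXor(A,B);   % A is now 6 (0b0110), oldA holds the previous value, 12
    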

    Examples


    Perform a simple atomic XOR operation by using the gpucoder.atomicXor function and generate CUDA® code that calls the corresponding CUDA atomicXor() APIs.

    In one file, write an entry-point function myAtomicXor that accepts matrix inputs a and b.

    function a = myAtomicXor(a,b)
    
    % Atomically XOR each element of a with the scalar b
    coder.gpu.kernelfun;
    for i = 1:numel(a)
        [a(i),~] = gpucoder.atomicXor(a(i), b);
    end
    
    end
    

    To create types for a variable-size row vector of uint32 values and a scalar uint32 for use in code generation, use the coder.newtype function.

    A = coder.newtype('uint32', [1 30], [0 1]);
    B = coder.newtype('uint32', [1 1], [0 0]);
    inputArgs = {A,B};
    
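    Equivalently, the same input types can be described with the coder.typeof function, a sketch in which the example values only seed the class:

    A = coder.typeof(uint32(0), [1 30], [0 1]);  % variable-size 1-by-:30 uint32 row vector
    B = coder.typeof(uint32(0));                 % scalar uint32
    inputArgs = {A,B};
    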

    To generate a CUDA library, use the codegen function.

    cfg = coder.gpuConfig('lib');
    cfg.GenerateReport = true;
    
    codegen -config cfg -args inputArgs myAtomicXor -d myAtomicXor
    
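    To exercise the generated code from MATLAB on a CUDA-capable GPU, a MEX target can be generated instead (a sketch that reuses the same entry point and input types):

    cfg = coder.gpuConfig('mex');
    codegen -config cfg -args inputArgs myAtomicXor
    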

    The generated CUDA code contains the myAtomicXor_kernel1 kernel with calls to the atomicXor() CUDA APIs.

    //
    // File: myAtomicXor.cu
    //
    ...
    
    static __global__ __launch_bounds__(1024, 1) void myAtomicXor_kernel1(
        const uint32_T b, const int32_T i, uint32_T a_data[])
    {
      uint64_T loopEnd;
      uint64_T threadId;
    ...
    
      for (uint64_T idx{threadId}; idx <= loopEnd; idx += threadStride) {
        int32_T b_i;
        b_i = static_cast<int32_T>(idx);
        atomicXor(&a_data[b_i], b);
      }
    }
    ...
    
    void myAtomicXor(uint32_T a_data[], int32_T a_size[2], uint32_T b)
    {
      dim3 block;
      dim3 grid;
    ...
    
        cudaMemcpy(gpu_a_data, a_data, a_size[1] * sizeof(uint32_T),
                   cudaMemcpyHostToDevice);
        myAtomicXor_kernel1<<<grid, block>>>(b, i, gpu_a_data);
        cudaMemcpy(a_data, gpu_a_data, a_size[1] * sizeof(uint32_T),
                   cudaMemcpyDeviceToHost);
    ...
    
    }
    

    Input Arguments


    Operands, specified as scalars, vectors, matrices, or multidimensional arrays. Inputs A and B must satisfy the following requirements:

    • Have the same data type.

    • Have the same size or have sizes that are compatible. For example, A is an M-by-N matrix and B is a scalar or 1-by-N row vector, as in the sketch below.

    Data Types: int32 | uint32 | uint64
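
    A minimal sketch of compatible sizes, using hypothetical values, in which the scalar B is applied to every element of A:

    A = uint32([1 2 3 4]);
    B = uint32(5);
    [A,oldA] = gpucoder.atomicXor(A,B);  % each A(i) becomes bitxor(A(i),5); oldA holds the previous A
    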

    Limitations

    • Function handle input to the gpucoder.stencilKernel pragma cannot contain calls to atomic functions. For example, this usage is not supported:

      out1 = gpucoder.stencilKernel(@myAtomicXor,A,[3 3],'same',B);
      

    Version History

    Introduced in R2021b