How to run a CUDA kernel on only one Streaming Multiprocessor (32 cores/threads) so that there can be perfect synchrony between them?

Goal

I would like to code a GPU kernel that allows perfect synchronization at various steps among all GPU cores processing it in parallel.

Background

My understanding of CUDA processing is that most NVIDIA devices are organized into “streaming multiprocessors” (SMs) that each contain 32 cores. It is possible to attain synchronization between these 32 units within the same kernel, but not beyond those 32. If you run 32 threads on one 32-core streaming multiprocessor (one thread per core), however, those threads can be kept tightly in sync.

Reference 1:
https://riptutorial.com/opencl/example/31156/threads-and-execution

A typical NVidia SM has 32 processing cores, thus its warp size is 32. The warp size is rather important when choosing the number of threads later on. All threads inside a single warp share a single instruction counter. That means those 32 threads are truly synchronized in that every thread executes every command at the same time.
Syncing threads is also not a simple matter. You can only sync threads within a single SM. Everything outside the SM is unsyncable from inside the kernel. You’ll have to write separate kernels and launch them one after the other.
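
As an aside, the warp size and SM count that the reference talks about can be queried at runtime rather than assumed. A minimal sketch, assuming device 0, that just prints them:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // device 0
    printf("warpSize            : %d\n", prop.warpSize);
    printf("multiProcessorCount : %d\n", prop.multiProcessorCount);
    printf("maxThreadsPerBlock  : %d\n", prop.maxThreadsPerBlock);
    return 0;
}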

Reference 2:
https://developer.nvidia.com/blog/using-cuda-warp-level-primitives/

When threads in a warp need to perform more complicated communications or collective operations than what the data exchange primitives provide, you can use the __syncwarp() primitive to synchronize threads in a warp. It is similar to the __syncthreads() primitive (which synchronizes all threads in the thread block) but at finer granularity.
The __syncwarp() primitive causes the executing thread to wait until all threads specified in mask have executed a __syncwarp() (with the same mask) before resuming execution. It also provides a memory fence to allow threads to communicate via memory before and after calling the primitive.
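
As an illustration of the “data exchange primitives” that post mentions, a warp-wide sum can be done entirely with the __shfl_down_sync() shuffle, with no shared memory involved. A sketch of that classic pattern (the function name here is my own):

__device__ float warp_reduce_sum(float val) {
    // Each iteration folds the upper half of the lanes onto the lower half;
    // the 0xffffffff mask means all 32 lanes of the warp participate.
    for (int offset = 16; offset > 0; offset /= 2)
        val += __shfl_down_sync(0xffffffff, val, offset);
    return val;                          // lane 0 ends up holding the warp-wide sum
}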

Idea:

It seems, then, that if I keep my GPU kernel code limited to no more than 32 threads/cores at a time, and call __syncwarp() within it any time I need all cores resynchronized before continuing, this will work for my needs.
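
A minimal sketch of what I mean, with a hypothetical two-step kernel (the name and the work are made up) run by a single warp of 32 threads, using __syncwarp() to separate the steps:

__global__ void two_step_kernel(float *data) {
    int lane = threadIdx.x;              // 0..31 when launched with 32 threads
    __shared__ float scratch[32];

    // Step 1: each thread writes its own slot.
    scratch[lane] = data[lane] * 2.0f;
    __syncwarp();                        // step-1 writes are now visible to the whole warp

    // Step 2: each thread reads a neighbour's step-1 result.
    data[lane] = scratch[(lane + 1) % 32];
}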

Question:

I am wondering whether there is anything I need to do to ensure that all 32 of my threads end up on the same “streaming multiprocessor” when I run the kernel, so that the __syncwarp() call will actually work.

I believe the kernel syntax is:

kernel_function<<<num blocks, threads per block, dynamic shared memory, stream ID>>>( ....arguments....)

I have found few good explanations of this syntax, though. I understand the stream ID, and dynamic shared memory is not relevant here, so from what I have seen that argument can just be 0.

What would I put for num blocks and threads per block? Would it be 1 block and 32 threads per block if I want easy, perfect thread synchronization? So:

kernel_function<<<1, 32, 0, streamID>>>( ....arguments....)

Is this correct? Will this always provide me 32 threads on 32 cores in one “streaming multiprocessor” that can be synchronized perfectly with __syncwarp()? Or is there something else to it?
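
For reference, here is a complete minimal sketch of the setup I have in mind; the kernel body is just a trivial placeholder, and only the launch configuration (1 block, 32 threads, 0 dynamic shared memory, an explicit stream) matters for the question:

#include <cstdio>
#include <cuda_runtime.h>

// Trivial stand-in for kernel_function: each of the 32 threads doubles one element.
__global__ void kernel_function(float *data) {
    int lane = threadIdx.x;              // 0..31
    data[lane] *= 2.0f;
    __syncwarp();                        // all 32 threads reach this point before continuing
}

int main() {
    float h[32];
    for (int i = 0; i < 32; ++i) h[i] = (float)i;

    float *d;
    cudaMalloc(&d, 32 * sizeof(float));
    cudaMemcpy(d, h, 32 * sizeof(float), cudaMemcpyHostToDevice);

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // 1 block, 32 threads per block, 0 bytes of dynamic shared memory, explicit stream.
    kernel_function<<<1, 32, 0, stream>>>(d);

    cudaStreamSynchronize(stream);
    cudaMemcpy(h, d, 32 * sizeof(float), cudaMemcpyDeviceToHost);
    printf("h[5] = %f (expect 10.0)\n", h[5]);

    cudaStreamDestroy(stream);
    cudaFree(d);
    return 0;
}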
