Allocating pinned memory in MATLAB MEX with CUDA
I have an application where I call my own CUDA functions from a MEX file. However, the memory transferred can be very large (both input and output), which means pinned memory could speed up the process considerably.
I have seen several posts on the internet and on here stating that you cannot use pinned memory (cudaMallocHost) with MATLAB variables, but all of those are from 2017 or earlier. Now that it is 2019, and the Parallel Computing Toolbox, CUDA and MATLAB have all changed a lot, is this still true? Is it still impossible to use pinned memory? For applications where memory transfer is critical this is a big drawback.
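For reference, below is a minimal sketch of the kind of thing I would like to do: page-lock the MATLAB-owned input and output buffers in place with cudaHostRegister so that cudaMemcpyAsync can treat them as pinned memory, instead of staging through separate cudaMallocHost buffers and paying an extra host-side copy. The MEX entry point and CUDA calls are standard, but processOnDevice is just a hypothetical stand-in for my own kernels, and error checking is omitted.

// pinned_mex_sketch.cu -- sketch only, not a confirmed-working approach.
#include "mex.h"
#include <cuda_runtime.h>

// Hypothetical placeholder for my actual CUDA kernels.
static void processOnDevice(double *d_data, size_t n) { (void)d_data; (void)n; }

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    if (nrhs != 1 || !mxIsDouble(prhs[0]))
        mexErrMsgIdAndTxt("pinned:input", "Expected one real double array.");

    size_t n     = mxGetNumberOfElements(prhs[0]);
    size_t bytes = n * sizeof(double);
    double *in   = mxGetPr(prhs[0]);

    plhs[0] = mxCreateDoubleMatrix(mxGetM(prhs[0]), mxGetN(prhs[0]), mxREAL);
    double *out = mxGetPr(plhs[0]);

    // Page-lock the MATLAB-owned buffers in place -- this is the part I am
    // unsure is supported for mxArray data.
    cudaHostRegister(in,  bytes, cudaHostRegisterDefault);
    cudaHostRegister(out, bytes, cudaHostRegisterDefault);

    double *d_data = NULL;
    cudaMalloc((void **)&d_data, bytes);

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // With pinned host memory these copies can be genuinely asynchronous.
    cudaMemcpyAsync(d_data, in, bytes, cudaMemcpyHostToDevice, stream);
    processOnDevice(d_data, n);
    cudaMemcpyAsync(out, d_data, bytes, cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);

    // Un-pin before handing the arrays back to MATLAB.
    cudaHostUnregister(in);
    cudaHostUnregister(out);
    cudaStreamDestroy(stream);
    cudaFree(d_data);
}

If pinning mxArray data in place like this is not allowed, the only fallback I know of is allocating a staging buffer with cudaMallocHost and memcpy'ing the MATLAB data into it, which adds an extra host-side copy on every call.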
Answers (0)