Using the GPU through the Parallel Computing Toolbox to optimize matrix inversion

Hello,
In my code I implement:
A = B\c;
B is an invertible matrix (14400x14400) and c is a vector.
Since B is large, this takes a very long time. I am running Windows 7, 64-bit, with 4 GB RAM and 2 cores.
Could I shorten the time by using the GPU through the Parallel Computing Toolbox?
Would I need to write my own version of parallel matrix inversion, or has this problem been faced before and a solution posted?
Thank you.

Accepted Answer

Jill Reese on 2 Oct 2012
As of MATLAB R2010b, this functionality has been available on the GPU in the Parallel Computing Toolbox. To use it, at least one of your variables B and c must be transferred to the GPU before calling mldivide (\).
In the latest release of MATLAB (R2012b), you can use this code to solve your problem:
B = gpuArray(B); % transfer B to the GPU
c = gpuArray(c); % transfer c to the GPU
A = B \ c;       % solve the linear system on the GPU and store A on the GPU;
                 % this line of your existing code doesn't change at all
% You can continue to perform work on the GPU using A, B, and c, or:
A = gather(A);   % transfer A back to the MATLAB workspace
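
For a rough sense of the speedup on your hardware, you can time the CPU and GPU solves side by side. This is a minimal sketch with made-up data (the matrix size and test matrix are assumptions, not from the question; wait(gpuDevice) makes the timer include all pending GPU work):

n = 4096;                      % hypothetical size; the real system is 14400x14400
Bcpu = rand(n) + n*eye(n);     % made-up, well-conditioned test matrix
ccpu = rand(n, 1);

tic; Acpu = Bcpu \ ccpu; tCPU = toc;                   % CPU solve

Bgpu = gpuArray(Bcpu); cgpu = gpuArray(ccpu);          % transfer once
tic; Agpu = Bgpu \ cgpu; wait(gpuDevice); tGPU = toc;  % GPU solve
fprintf('CPU: %.3f s, GPU: %.3f s\n', tCPU, tGPU);

Transfers over the PCIe bus are not free, so the GPU pays off most when you solve repeatedly or keep the subsequent work on the device.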

More Answers (1)

lior on 3 Oct 2012
Great,
So mldivide knows how to exploit the GPU and run in parallel?
I guess that is true for every function in MATLAB: it gets parallelized when I use it on gpuArray variables, without my having to optimize it myself?
How about functions I write myself? If I write a simple for loop with basic math operators, would the GPU not be used even if the variables are gpuArrays?
  1 Comment
Matt J on 3 Oct 2012
If the operations done in each pass through the loop are element-wise, then you can use arrayfun to parallelize the loop on the GPU.
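For example, a minimal sketch (the loop body and data here are hypothetical, not from the question):

x = gpuArray(rand(14400, 1));  % element-wise input data on the GPU
% Instead of: for k = 1:numel(x), y(k) = x(k)^2 + sin(x(k)); end
f = @(v) v^2 + sin(v);         % simple element-wise function
y = arrayfun(f, x);            % arrayfun runs f on the GPU, one result per element
y = gather(y);                 % transfer the result back when needed

A plain for loop over gpuArray elements would launch one tiny GPU operation per iteration, which is slow; arrayfun instead compiles the whole loop body into a single GPU kernel.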

